From patchwork Wed Nov 29 21:25:17 2023
X-Patchwork-Submitter: Arseniy Krasnov
X-Patchwork-Id: 13473501
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
From: Arseniy Krasnov <avkrasnov@salutedevices.com>
To: Stefan Hajnoczi, Stefano Garzarella, "David S. Miller", Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, "Michael S. Tsirkin", Jason Wang,
	Bobby Eshleman
Subject: [RFC PATCH v4 1/3] vsock: update SO_RCVLOWAT setting callback
Date: Thu, 30 Nov 2023 00:25:17 +0300
Message-ID: <20231129212519.2938875-2-avkrasnov@salutedevices.com>
In-Reply-To: <20231129212519.2938875-1-avkrasnov@salutedevices.com>
References: <20231129212519.2938875-1-avkrasnov@salutedevices.com>
X-Mailing-List: netdev@vger.kernel.org

Do not return when the transport callback for SO_RCVLOWAT is set (return
early only in the error case). This way 'sk_rcvlowat' does not need to be
set in each transport - it is set in 'vsock_set_rcvlowat()' only. Also,
since 'sk_rcvlowat' is now set only in af_vsock.c, rename the callback
from 'set_rcvlowat' to 'notify_set_rcvlowat'.

Signed-off-by: Arseniy Krasnov <avkrasnov@salutedevices.com>
Reviewed-by: Stefano Garzarella
---
Changelog:
v3 -> v4:
 * Rename 'set_rcvlowat' to 'notify_set_rcvlowat'.
 * Commit message updated.

 include/net/af_vsock.h           | 2 +-
 net/vmw_vsock/af_vsock.c         | 9 +++++++--
 net/vmw_vsock/hyperv_transport.c | 4 ++--
 3 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
index e302c0e804d0..535701efc1e5 100644
--- a/include/net/af_vsock.h
+++ b/include/net/af_vsock.h
@@ -137,7 +137,6 @@ struct vsock_transport {
 	u64 (*stream_rcvhiwat)(struct vsock_sock *);
 	bool (*stream_is_active)(struct vsock_sock *);
 	bool (*stream_allow)(u32 cid, u32 port);
-	int (*set_rcvlowat)(struct vsock_sock *vsk, int val);
 
 	/* SEQ_PACKET. */
 	ssize_t (*seqpacket_dequeue)(struct vsock_sock *vsk, struct msghdr *msg,
@@ -168,6 +167,7 @@ struct vsock_transport {
 					struct vsock_transport_send_notify_data *);
 	/* sk_lock held by the caller */
 	void (*notify_buffer_size)(struct vsock_sock *, u64 *);
+	int (*notify_set_rcvlowat)(struct vsock_sock *vsk, int val);
 
 	/* Shutdown. */
 	int (*shutdown)(struct vsock_sock *, int);
 
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index 816725af281f..54ba7316f808 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -2264,8 +2264,13 @@ static int vsock_set_rcvlowat(struct sock *sk, int val)
 
 	transport = vsk->transport;
 
-	if (transport && transport->set_rcvlowat)
-		return transport->set_rcvlowat(vsk, val);
+	if (transport && transport->notify_set_rcvlowat) {
+		int err;
+
+		err = transport->notify_set_rcvlowat(vsk, val);
+		if (err)
+			return err;
+	}
 
 	WRITE_ONCE(sk->sk_rcvlowat, val ? : 1);
 	return 0;
diff --git a/net/vmw_vsock/hyperv_transport.c b/net/vmw_vsock/hyperv_transport.c
index 7cb1a9d2cdb4..e2157e387217 100644
--- a/net/vmw_vsock/hyperv_transport.c
+++ b/net/vmw_vsock/hyperv_transport.c
@@ -816,7 +816,7 @@ int hvs_notify_send_post_enqueue(struct vsock_sock *vsk, ssize_t written,
 }
 
 static
-int hvs_set_rcvlowat(struct vsock_sock *vsk, int val)
+int hvs_notify_set_rcvlowat(struct vsock_sock *vsk, int val)
 {
 	return -EOPNOTSUPP;
 }
@@ -856,7 +856,7 @@ static struct vsock_transport hvs_transport = {
 	.notify_send_pre_enqueue  = hvs_notify_send_pre_enqueue,
 	.notify_send_post_enqueue = hvs_notify_send_post_enqueue,
 
-	.set_rcvlowat             = hvs_set_rcvlowat
+	.notify_set_rcvlowat      = hvs_notify_set_rcvlowat
 };
 
 static bool hvs_check_transport(struct vsock_sock *vsk)
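
For reference, the userspace-visible behaviour is unchanged by this patch: SO_RCVLOWAT is still set at the SOL_SOCKET level, and vsock_set_rcvlowat() stores a zero value as 1. A minimal sketch, illustrative only and not part of the patch (the helper name is made up; 'fd' is assumed to be a connected vsock stream socket):

#include <sys/socket.h>

/* Illustrative only: raise the receive low-water mark so that poll()
 * and blocking recv() wait for at least 'lowat' bytes. A value of 0 is
 * stored as 1 by vsock_set_rcvlowat().
 */
static int vsock_set_lowat(int fd, int lowat)
{
	if (setsockopt(fd, SOL_SOCKET, SO_RCVLOWAT, &lowat, sizeof(lowat)))
		return -1;

	return 0;
}
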
Tsirkin" , Jason Wang , Bobby Eshleman CC: , , , , , , Subject: [RFC PATCH v4 2/3] virtio/vsock: send credit update during setting SO_RCVLOWAT Date: Thu, 30 Nov 2023 00:25:18 +0300 Message-ID: <20231129212519.2938875-3-avkrasnov@salutedevices.com> X-Mailer: git-send-email 2.35.0 In-Reply-To: <20231129212519.2938875-1-avkrasnov@salutedevices.com> References: <20231129212519.2938875-1-avkrasnov@salutedevices.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: p-i-exch-sc-m01.sberdevices.ru (172.16.192.107) To p-i-exch-sc-m01.sberdevices.ru (172.16.192.107) X-KSMG-Rule-ID: 10 X-KSMG-Message-Action: clean X-KSMG-AntiSpam-Lua-Profiles: 181714 [Nov 29 2023] X-KSMG-AntiSpam-Version: 6.0.0.2 X-KSMG-AntiSpam-Envelope-From: avkrasnov@salutedevices.com X-KSMG-AntiSpam-Rate: 0 X-KSMG-AntiSpam-Status: not_detected X-KSMG-AntiSpam-Method: none X-KSMG-AntiSpam-Auth: dkim=none X-KSMG-AntiSpam-Info: LuaCore: 5 0.3.5 98d108ddd984cca1d7e65e595eac546a62b0144b, {Tracking_from_domain_doesnt_match_to}, p-i-exch-sc-m01.sberdevices.ru:7.1.1,5.0.1;100.64.160.123:7.1.2;d41d8cd98f00b204e9800998ecf8427e.com:7.1.1;127.0.0.199:7.1.2;salutedevices.com:7.1.1, FromAlignment: s, ApMailHostAddress: 100.64.160.123 X-MS-Exchange-Organization-SCL: -1 X-KSMG-AntiSpam-Interceptor-Info: scan successful X-KSMG-AntiPhishing: Clean X-KSMG-LinksScanning: Clean X-KSMG-AntiVirus: Kaspersky Secure Mail Gateway, version 2.0.1.6960, bases: 2023/11/29 19:17:00 #22574327 X-KSMG-AntiVirus-Status: Clean, skipped X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC Send credit update message when SO_RCVLOWAT is updated and it is bigger than number of bytes in rx queue. It is needed, because 'poll()' will wait until number of bytes in rx queue will be not smaller than SO_RCVLOWAT, so kick sender to send more data. Otherwise mutual hungup for tx/rx is possible: sender waits for free space and receiver is waiting data in 'poll()'. Signed-off-by: Arseniy Krasnov --- Changelog: v1 -> v2: * Update commit message by removing 'This patch adds XXX' manner. * Do not initialize 'send_update' variable - set it directly during first usage. v3 -> v4: * Fit comment in 'virtio_transport_notify_set_rcvlowat()' to 80 chars. 
 drivers/vhost/vsock.c                   |  3 ++-
 include/linux/virtio_vsock.h            |  1 +
 net/vmw_vsock/virtio_transport.c        |  3 ++-
 net/vmw_vsock/virtio_transport_common.c | 27 +++++++++++++++++++++++++
 net/vmw_vsock/vsock_loopback.c          |  3 ++-
 5 files changed, 34 insertions(+), 3 deletions(-)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index f75731396b7e..c5e58a60a546 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -449,8 +449,9 @@ static struct virtio_transport vhost_transport = {
 		.notify_send_pre_enqueue  = virtio_transport_notify_send_pre_enqueue,
 		.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
 		.notify_buffer_size       = virtio_transport_notify_buffer_size,
+		.notify_set_rcvlowat      = virtio_transport_notify_set_rcvlowat,
 
-		.read_skb = virtio_transport_read_skb,
+		.read_skb = virtio_transport_read_skb
 	},
 
 	.send_pkt = vhost_transport_send_pkt,
diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
index ebb3ce63d64d..c82089dee0c8 100644
--- a/include/linux/virtio_vsock.h
+++ b/include/linux/virtio_vsock.h
@@ -256,4 +256,5 @@ void virtio_transport_put_credit(struct virtio_vsock_sock *vvs, u32 credit);
 void virtio_transport_deliver_tap_pkt(struct sk_buff *skb);
 int virtio_transport_purge_skbs(void *vsk, struct sk_buff_head *list);
 int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t read_actor);
+int virtio_transport_notify_set_rcvlowat(struct vsock_sock *vsk, int val);
 #endif /* _LINUX_VIRTIO_VSOCK_H */
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index af5bab1acee1..8b7bb7ca8ea5 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -537,8 +537,9 @@ static struct virtio_transport virtio_transport = {
 		.notify_send_pre_enqueue  = virtio_transport_notify_send_pre_enqueue,
 		.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
 		.notify_buffer_size       = virtio_transport_notify_buffer_size,
+		.notify_set_rcvlowat      = virtio_transport_notify_set_rcvlowat,
 
-		.read_skb = virtio_transport_read_skb,
+		.read_skb = virtio_transport_read_skb
 	},
 
 	.send_pkt = virtio_transport_send_pkt,
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index f6dc896bf44c..1cb556ad4597 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -1684,6 +1684,33 @@ int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t recv_acto
 }
 EXPORT_SYMBOL_GPL(virtio_transport_read_skb);
 
+int virtio_transport_notify_set_rcvlowat(struct vsock_sock *vsk, int val)
+{
+	struct virtio_vsock_sock *vvs = vsk->trans;
+	bool send_update;
+
+	spin_lock_bh(&vvs->rx_lock);
+
+	/* If number of available bytes is less than new SO_RCVLOWAT value,
+	 * kick sender to send more data, because sender may sleep in its
+	 * 'send()' syscall waiting for enough space at our side.
+	 */
+	send_update = vvs->rx_bytes < val;
+
+	spin_unlock_bh(&vvs->rx_lock);
+
+	if (send_update) {
+		int err;
+
+		err = virtio_transport_send_credit_update(vsk);
+		if (err < 0)
+			return err;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(virtio_transport_notify_set_rcvlowat);
+
 MODULE_LICENSE("GPL v2");
 MODULE_AUTHOR("Asias He");
 MODULE_DESCRIPTION("common code for virtio vsock");
diff --git a/net/vmw_vsock/vsock_loopback.c b/net/vmw_vsock/vsock_loopback.c
index 048640167411..454f69838c2a 100644
--- a/net/vmw_vsock/vsock_loopback.c
+++ b/net/vmw_vsock/vsock_loopback.c
@@ -96,8 +96,9 @@ static struct virtio_transport loopback_transport = {
 		.notify_send_pre_enqueue  = virtio_transport_notify_send_pre_enqueue,
 		.notify_send_post_enqueue = virtio_transport_notify_send_post_enqueue,
 		.notify_buffer_size       = virtio_transport_notify_buffer_size,
+		.notify_set_rcvlowat      = virtio_transport_notify_set_rcvlowat,
 
-		.read_skb = virtio_transport_read_skb,
+		.read_skb = virtio_transport_read_skb
 	},
 
 	.send_pkt = vsock_loopback_send_pkt,
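
For readers tracing the code, the resulting path from userspace to the new callback is roughly the following. This is a sketch from reading the sources, not part of the diff:

/*
 *   setsockopt(fd, SOL_SOCKET, SO_RCVLOWAT, &val, sizeof(val))
 *     -> sock_setsockopt()                  // net/core/sock.c
 *        -> sock->ops->set_rcvlowat()       // vsock_set_rcvlowat()
 *           -> transport->notify_set_rcvlowat(vsk, val)
 *              -> virtio_transport_notify_set_rcvlowat()
 *                 -> virtio_transport_send_credit_update() when
 *                    vvs->rx_bytes < val
 *           -> WRITE_ONCE(sk->sk_rcvlowat, val ?: 1) on success
 */
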
Tsirkin" , Jason Wang , Bobby Eshleman CC: , , , , , , Subject: [RFC PATCH v4 3/3] vsock/test: SO_RCVLOWAT + deferred credit update test Date: Thu, 30 Nov 2023 00:25:19 +0300 Message-ID: <20231129212519.2938875-4-avkrasnov@salutedevices.com> X-Mailer: git-send-email 2.35.0 In-Reply-To: <20231129212519.2938875-1-avkrasnov@salutedevices.com> References: <20231129212519.2938875-1-avkrasnov@salutedevices.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: p-i-exch-sc-m01.sberdevices.ru (172.16.192.107) To p-i-exch-sc-m01.sberdevices.ru (172.16.192.107) X-KSMG-Rule-ID: 10 X-KSMG-Message-Action: clean X-KSMG-AntiSpam-Lua-Profiles: 181714 [Nov 29 2023] X-KSMG-AntiSpam-Version: 6.0.0.2 X-KSMG-AntiSpam-Envelope-From: avkrasnov@salutedevices.com X-KSMG-AntiSpam-Rate: 0 X-KSMG-AntiSpam-Status: not_detected X-KSMG-AntiSpam-Method: none X-KSMG-AntiSpam-Auth: dkim=none X-KSMG-AntiSpam-Info: LuaCore: 5 0.3.5 98d108ddd984cca1d7e65e595eac546a62b0144b, {Tracking_from_domain_doesnt_match_to}, 100.64.160.123:7.1.2;p-i-exch-sc-m01.sberdevices.ru:5.0.1,7.1.1;d41d8cd98f00b204e9800998ecf8427e.com:7.1.1;127.0.0.199:7.1.2;salutedevices.com:7.1.1, FromAlignment: s, ApMailHostAddress: 100.64.160.123 X-MS-Exchange-Organization-SCL: -1 X-KSMG-AntiSpam-Interceptor-Info: scan successful X-KSMG-AntiPhishing: Clean X-KSMG-LinksScanning: Clean X-KSMG-AntiVirus: Kaspersky Secure Mail Gateway, version 2.0.1.6960, bases: 2023/11/29 19:17:00 #22574327 X-KSMG-AntiVirus-Status: Clean, skipped X-Patchwork-State: RFC Test which checks, that updating SO_RCVLOWAT value also sends credit update message. Otherwise mutual hungup may happen when receiver didn't send credit update and then calls 'poll()' with non default SO_RCVLOWAT value (e.g. waiting enough bytes to read), while sender waits for free space at receiver's side. Important thing is that this test relies on kernel's define for maximum packet size for virtio transport and this value is not exported to user: VIRTIO_VSOCK_MAX_PKT_BUF_SIZE (this define is used to control moment when to send credit update message). If this value or its usage will be changed in kernel - this test may become useless/broken. Signed-off-by: Arseniy Krasnov --- Changelog: v1 -> v2: * Update commit message by removing 'This patch adds XXX' manner. * Update commit message by adding details about dependency for this test from kernel internal define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE. * Add comment for this dependency in 'vsock_test.c' where this define is duplicated. v2 -> v3: * Replace synchronization based on control TCP socket with vsock data socket - this is needed to allow sender transmit data only when new buffer size of receiver is visible to sender. Otherwise there is race and test fails sometimes. v3 -> v4: * Replace 'recv_buf()' to 'recv(MSG_DONTWAIT)' in last read operation in server part. This is needed to ensure that 'poll()' wake up us when number of bytes ready to read is equal to SO_RCVLOWAT value. 
 tools/testing/vsock/vsock_test.c | 149 +++++++++++++++++++++++++++++++
 1 file changed, 149 insertions(+)

diff --git a/tools/testing/vsock/vsock_test.c b/tools/testing/vsock/vsock_test.c
index 01fa816868bc..68f7037834db 100644
--- a/tools/testing/vsock/vsock_test.c
+++ b/tools/testing/vsock/vsock_test.c
@@ -1232,6 +1232,150 @@ static void test_double_bind_connect_client(const struct test_opts *opts)
 	}
 }
 
+#define RCVLOWAT_CREDIT_UPD_BUF_SIZE (1024 * 128)
+/* This define is the same as in 'include/linux/virtio_vsock.h':
+ * it is used to decide when to send credit update message during
+ * reading from rx queue of a socket. Value and its usage in
+ * kernel is important for this test.
+ */
+#define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE (1024 * 64)
+
+static void test_stream_rcvlowat_def_cred_upd_client(const struct test_opts *opts)
+{
+	size_t buf_size;
+	void *buf;
+	int fd;
+
+	fd = vsock_stream_connect(opts->peer_cid, 1234);
+	if (fd < 0) {
+		perror("connect");
+		exit(EXIT_FAILURE);
+	}
+
+	/* Send 1 byte more than peer's buffer size. */
+	buf_size = RCVLOWAT_CREDIT_UPD_BUF_SIZE + 1;
+
+	buf = malloc(buf_size);
+	if (!buf) {
+		perror("malloc");
+		exit(EXIT_FAILURE);
+	}
+
+	/* Wait until peer sets needed buffer size. */
+	recv_byte(fd, 1, 0);
+
+	if (send(fd, buf, buf_size, 0) != buf_size) {
+		perror("send failed");
+		exit(EXIT_FAILURE);
+	}
+
+	free(buf);
+	close(fd);
+}
+
+static void test_stream_rcvlowat_def_cred_upd_server(const struct test_opts *opts)
+{
+	size_t recv_buf_size;
+	struct pollfd fds;
+	size_t buf_size;
+	void *buf;
+	int fd;
+
+	fd = vsock_stream_accept(VMADDR_CID_ANY, 1234, NULL);
+	if (fd < 0) {
+		perror("accept");
+		exit(EXIT_FAILURE);
+	}
+
+	buf_size = RCVLOWAT_CREDIT_UPD_BUF_SIZE;
+
+	if (setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE,
+		       &buf_size, sizeof(buf_size))) {
+		perror("setsockopt(SO_VM_SOCKETS_BUFFER_SIZE)");
+		exit(EXIT_FAILURE);
+	}
+
+	/* Send one dummy byte here, because 'setsockopt()' above also
+	 * sends special packet which tells sender to update our buffer
+	 * size. This 'send_byte()' will serialize such packet with data
+	 * reads in a loop below. Sender starts transmission only when
+	 * it receives this single byte.
+	 */
+	send_byte(fd, 1, 0);
+
+	buf = malloc(buf_size);
+	if (!buf) {
+		perror("malloc");
+		exit(EXIT_FAILURE);
+	}
+
+	/* Wait until there will be 128KB of data in rx queue. */
+	while (1) {
+		ssize_t res;
+
+		res = recv(fd, buf, buf_size, MSG_PEEK);
+		if (res == buf_size)
+			break;
+
+		if (res <= 0) {
+			fprintf(stderr, "unexpected 'recv()' return: %zi\n", res);
+			exit(EXIT_FAILURE);
+		}
+	}
+
+	/* There is 128KB of data in the socket's rx queue,
+	 * dequeue first 64KB, credit update is not sent.
+	 */
+	recv_buf_size = VIRTIO_VSOCK_MAX_PKT_BUF_SIZE;
+	recv_buf(fd, buf, recv_buf_size, 0, recv_buf_size);
+	recv_buf_size++;
+
+	/* Updating SO_RCVLOWAT will send credit update. */
+	if (setsockopt(fd, SOL_SOCKET, SO_RCVLOWAT,
+		       &recv_buf_size, sizeof(recv_buf_size))) {
+		perror("setsockopt(SO_RCVLOWAT)");
+		exit(EXIT_FAILURE);
+	}
+
+	memset(&fds, 0, sizeof(fds));
+	fds.fd = fd;
+	fds.events = POLLIN | POLLRDNORM | POLLERR |
+		     POLLRDHUP | POLLHUP;
+
+	/* This 'poll()' will return once we receive last byte
+	 * sent by client.
+	 */
+	if (poll(&fds, 1, -1) < 0) {
+		perror("poll");
+		exit(EXIT_FAILURE);
+	}
+
+	if (fds.revents & POLLERR) {
+		fprintf(stderr, "'poll()' error\n");
+		exit(EXIT_FAILURE);
+	}
+
+	if (fds.revents & (POLLIN | POLLRDNORM)) {
+		ssize_t res;
+
+		res = recv(fd, buf, recv_buf_size, MSG_DONTWAIT);
+		if (res != recv_buf_size) {
+			fprintf(stderr, "recv(2), expected %zu, got %zi\n",
+				recv_buf_size, res);
+			exit(EXIT_FAILURE);
+		}
+	} else {
+		/* These flags must be set, as there is at
+		 * least 64KB of data ready to read.
+		 */
+		fprintf(stderr, "POLLIN | POLLRDNORM expected\n");
+		exit(EXIT_FAILURE);
+	}
+
+	free(buf);
+	close(fd);
+}
+
 static struct test_case test_cases[] = {
 	{
 		.name = "SOCK_STREAM connection reset",
@@ -1342,6 +1486,11 @@ static struct test_case test_cases[] = {
 		.run_client = test_double_bind_connect_client,
 		.run_server = test_double_bind_connect_server,
 	},
+	{
+		.name = "SOCK_STREAM virtio SO_RCVLOWAT + deferred cred update",
+		.run_client = test_stream_rcvlowat_def_cred_upd_client,
+		.run_server = test_stream_rcvlowat_def_cred_upd_server,
+	},
 	{},
 };