From patchwork Tue Dec 10 10:43:07 2019
From: Stefano Garzarella
To: netdev@vger.kernel.org, davem@davemloft.net
Cc: Dexuan Cui, Jorgen Hansen, Stefan Hajnoczi, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
    Stefano Garzarella
Subject: [PATCH net-next v2 6/6] vsock/virtio: remove loopback handling
Date: Tue, 10 Dec 2019 11:43:07 +0100
Message-Id: <20191210104307.89346-7-sgarzare@redhat.com>
In-Reply-To: <20191210104307.89346-1-sgarzare@redhat.com>
References: <20191210104307.89346-1-sgarzare@redhat.com>

We can remove the loopback handling from virtio_transport, because the
vsock core is now able to handle local communication using the new
vsock_loopback device.
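For reference, once the vsock_loopback module from this series is loaded,
local communication can be smoke-tested from userspace. The snippet below
is a minimal sketch and is not part of this patch: it assumes the
well-known local CID 1 and an arbitrary port, and defines VMADDR_CID_LOCAL
inline in case the installed uapi header does not provide it yet.

/* vsock_local_test.c - minimal sketch; assumes vsock_loopback is loaded.
 * CID 1 is the well-known local CID; port 1234 is arbitrary.
 */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/vm_sockets.h>

#ifndef VMADDR_CID_LOCAL
#define VMADDR_CID_LOCAL 1
#endif

int main(void)
{
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid = VMADDR_CID_LOCAL,
		.svm_port = 1234,
	};
	char buf = 'x';
	int srv, cli, peer;

	srv = socket(AF_VSOCK, SOCK_STREAM, 0);
	cli = socket(AF_VSOCK, SOCK_STREAM, 0);
	if (srv < 0 || cli < 0 ||
	    bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
	    listen(srv, 1) < 0 ||
	    connect(cli, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("vsock loopback setup");
		return 1;
	}

	/* The handshake completed in-kernel while connect() blocked, so
	 * the incoming connection is already queued on 'srv'.
	 */
	peer = accept(srv, NULL, NULL);
	if (peer < 0 || write(cli, &buf, 1) != 1 || read(peer, &buf, 1) != 1) {
		perror("vsock loopback I/O");
		return 1;
	}

	puts("local vsock communication OK");
	close(peer);
	close(cli);
	close(srv);
	return 0;
}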
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Stefano Garzarella
---
 net/vmw_vsock/virtio_transport.c | 61 ++------------------------------
 1 file changed, 2 insertions(+), 59 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index 1458c5c8b64d..dfbaf6bd8b1c 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -44,10 +44,6 @@ struct virtio_vsock {
 	spinlock_t send_pkt_list_lock;
 	struct list_head send_pkt_list;
 
-	struct work_struct loopback_work;
-	spinlock_t loopback_list_lock; /* protects loopback_list */
-	struct list_head loopback_list;
-
 	atomic_t queued_replies;
 
 	/* The following fields are protected by rx_lock.  vqs[VSOCK_VQ_RX]
@@ -86,20 +82,6 @@ static u32 virtio_transport_get_local_cid(void)
 	return ret;
 }
 
-static int virtio_transport_send_pkt_loopback(struct virtio_vsock *vsock,
-					      struct virtio_vsock_pkt *pkt)
-{
-	int len = pkt->len;
-
-	spin_lock_bh(&vsock->loopback_list_lock);
-	list_add_tail(&pkt->list, &vsock->loopback_list);
-	spin_unlock_bh(&vsock->loopback_list_lock);
-
-	queue_work(virtio_vsock_workqueue, &vsock->loopback_work);
-
-	return len;
-}
-
 static void
 virtio_transport_send_pkt_work(struct work_struct *work)
 {
@@ -194,7 +176,8 @@ virtio_transport_send_pkt(struct virtio_vsock_pkt *pkt)
 	}
 
 	if (le64_to_cpu(pkt->hdr.dst_cid) == vsock->guest_cid) {
-		len = virtio_transport_send_pkt_loopback(vsock, pkt);
+		virtio_transport_free_pkt(pkt);
+		len = -ENODEV;
 		goto out_rcu;
 	}
 
@@ -502,33 +485,6 @@ static struct virtio_transport virtio_transport = {
 	.send_pkt = virtio_transport_send_pkt,
 };
 
-static void virtio_transport_loopback_work(struct work_struct *work)
-{
-	struct virtio_vsock *vsock =
-		container_of(work, struct virtio_vsock, loopback_work);
-	LIST_HEAD(pkts);
-
-	spin_lock_bh(&vsock->loopback_list_lock);
-	list_splice_init(&vsock->loopback_list, &pkts);
-	spin_unlock_bh(&vsock->loopback_list_lock);
-
-	mutex_lock(&vsock->rx_lock);
-
-	if (!vsock->rx_run)
-		goto out;
-
-	while (!list_empty(&pkts)) {
-		struct virtio_vsock_pkt *pkt;
-
-		pkt = list_first_entry(&pkts, struct virtio_vsock_pkt, list);
-		list_del_init(&pkt->list);
-
-		virtio_transport_recv_pkt(&virtio_transport, pkt);
-	}
-out:
-	mutex_unlock(&vsock->rx_lock);
-}
-
 static void virtio_transport_rx_work(struct work_struct *work)
 {
 	struct virtio_vsock *vsock =
@@ -633,13 +589,10 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
 	mutex_init(&vsock->event_lock);
 	spin_lock_init(&vsock->send_pkt_list_lock);
 	INIT_LIST_HEAD(&vsock->send_pkt_list);
-	spin_lock_init(&vsock->loopback_list_lock);
-	INIT_LIST_HEAD(&vsock->loopback_list);
 	INIT_WORK(&vsock->rx_work, virtio_transport_rx_work);
 	INIT_WORK(&vsock->tx_work, virtio_transport_tx_work);
 	INIT_WORK(&vsock->event_work, virtio_transport_event_work);
 	INIT_WORK(&vsock->send_pkt_work, virtio_transport_send_pkt_work);
-	INIT_WORK(&vsock->loopback_work, virtio_transport_loopback_work);
 
 	mutex_lock(&vsock->tx_lock);
 	vsock->tx_run = true;
@@ -720,22 +673,12 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
 	}
 	spin_unlock_bh(&vsock->send_pkt_list_lock);
 
-	spin_lock_bh(&vsock->loopback_list_lock);
-	while (!list_empty(&vsock->loopback_list)) {
-		pkt = list_first_entry(&vsock->loopback_list,
-				       struct virtio_vsock_pkt, list);
-		list_del(&pkt->list);
-		virtio_transport_free_pkt(pkt);
-	}
-	spin_unlock_bh(&vsock->loopback_list_lock);
-
 	/* Delete virtqueues and flush outstanding callbacks if any */
 	vdev->config->del_vqs(vdev);
 
 	/* Other works can be queued before 'config->del_vqs()', so we flush
 	 * all works before to free the vsock object to avoid use after free.
 	 */
-	flush_work(&vsock->loopback_work);
 	flush_work(&vsock->rx_work);
 	flush_work(&vsock->tx_work);
 	flush_work(&vsock->event_work);
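Note on the virtio_transport_send_pkt() hunk above: with the loopback
queue gone, a packet whose dst_cid matches the guest's own CID can no
longer be delivered by this transport, so it is freed and -ENODEV is
returned to the caller. In normal operation this path should not be hit,
since the vsock core now assigns such connections to the local transport
(vsock_loopback) before they ever reach virtio_transport.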