From patchwork Mon Nov 5 17:33:22 2018
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 10668813
From: Stefan Hajnoczi
To: kvm@vger.kernel.org
Cc: jasowang@redhat.com, "Michael S. Tsirkin", Stefan Hajnoczi
Subject: [PATCH] vhost/vsock: switch to a mutex for vhost_vsock_hash
Date: Mon, 5 Nov 2018 17:33:22 +0000
Message-Id: <20181105173322.3463-1-stefanha@redhat.com>

Now that there are no more data path users of vhost_vsock_lock, it can
be turned into a mutex.  It's only used by .release() and in the
.ioctl() path.

Depends-on: <20181105103547.22018-1-stefanha@redhat.com>
Suggested-by: Jason Wang
Signed-off-by: Stefan Hajnoczi
Acked-by: Jason Wang
---
Hi Jason,
Thanks for pointing out that spin_lock_bh() is no longer necessary.
Let's switch to a mutex.
---
 drivers/vhost/vsock.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 51879ed18652..d95b0c04cc9b 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -27,14 +27,14 @@ enum {
 };
 
 /* Used to track all the vhost_vsock instances on the system.
  */
-static DEFINE_SPINLOCK(vhost_vsock_lock);
+static DEFINE_MUTEX(vhost_vsock_mutex);
 static DEFINE_READ_MOSTLY_HASHTABLE(vhost_vsock_hash, 8);
 
 struct vhost_vsock {
 	struct vhost_dev dev;
 	struct vhost_virtqueue vqs[2];
 
-	/* Link to global vhost_vsock_hash, writes use vhost_vsock_lock */
+	/* Link to global vhost_vsock_hash, writes use vhost_vsock_mutex */
 	struct hlist_node hash;
 
 	struct vhost_work send_pkt_work;
@@ -51,7 +51,7 @@ static u32 vhost_transport_get_local_cid(void)
 	return VHOST_VSOCK_DEFAULT_HOST_CID;
 }
 
-/* Callers that dereference the return value must hold vhost_vsock_lock or the
+/* Callers that dereference the return value must hold vhost_vsock_mutex or the
  * RCU read lock.
  */
 static struct vhost_vsock *vhost_vsock_get(u32 guest_cid)
@@ -576,10 +576,10 @@ static int vhost_vsock_dev_release(struct inode *inode, struct file *file)
 {
 	struct vhost_vsock *vsock = file->private_data;
 
-	spin_lock_bh(&vhost_vsock_lock);
+	mutex_lock(&vhost_vsock_mutex);
 	if (vsock->guest_cid)
 		hash_del_rcu(&vsock->hash);
-	spin_unlock_bh(&vhost_vsock_lock);
+	mutex_unlock(&vhost_vsock_mutex);
 
 	/* Wait for other CPUs to finish using vsock */
 	synchronize_rcu();
@@ -623,10 +623,10 @@ static int vhost_vsock_set_cid(struct vhost_vsock *vsock, u64 guest_cid)
 		return -EINVAL;
 
 	/* Refuse if CID is already in use */
-	spin_lock_bh(&vhost_vsock_lock);
+	mutex_lock(&vhost_vsock_mutex);
 	other = vhost_vsock_get(guest_cid);
 	if (other && other != vsock) {
-		spin_unlock_bh(&vhost_vsock_lock);
+		mutex_unlock(&vhost_vsock_mutex);
 		return -EADDRINUSE;
 	}
 
@@ -635,7 +635,7 @@ static int vhost_vsock_set_cid(struct vhost_vsock *vsock, u64 guest_cid)
 	vsock->guest_cid = guest_cid;
 	hash_add_rcu(vhost_vsock_hash, &vsock->hash, guest_cid);
 
-	spin_unlock_bh(&vhost_vsock_lock);
+	mutex_unlock(&vhost_vsock_mutex);
 
 	return 0;
 }
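
For readers less familiar with this locking scheme, here is a minimal
sketch of the pattern the commit message relies on: hash table writers
run only in process context (the ioctl and release paths) and serialize
on a mutex, while lookups use RCU and never take the lock.  The names
below (example_mutex, example_hash, struct example_entry) are
illustrative only and do not appear in the driver; the real code is in
drivers/vhost/vsock.c as shown in the diff above.

/* Minimal sketch, not taken from the driver: a global hash table whose
 * writers serialize with a mutex and whose readers use RCU.
 */
#include <linux/errno.h>
#include <linux/hashtable.h>
#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/types.h>

struct example_entry {
	u32 key;
	struct hlist_node node;		/* linked into example_hash */
};

static DEFINE_MUTEX(example_mutex);	/* taken by writers only */
static DEFINE_READ_MOSTLY_HASHTABLE(example_hash, 8);

/* Callers must hold example_mutex or the RCU read lock while they
 * dereference the return value.
 */
static struct example_entry *example_lookup(u32 key)
{
	struct example_entry *e;

	hash_for_each_possible_rcu(example_hash, e, node, key) {
		if (e->key == key)
			return e;
	}
	return NULL;
}

/* Writer: process context only, so sleeping under the mutex is fine. */
static int example_insert(struct example_entry *e)
{
	int ret = 0;

	mutex_lock(&example_mutex);

	/* Duplicate check, mirroring the CID-in-use check in the patch.
	 * The RCU read lock is taken so the walk is unconditionally safe,
	 * although the mutex already excludes concurrent writers.
	 */
	rcu_read_lock();
	if (example_lookup(e->key))
		ret = -EADDRINUSE;
	rcu_read_unlock();

	if (!ret)
		hash_add_rcu(example_hash, &e->node, e->key);

	mutex_unlock(&example_mutex);
	return ret;
}

/* Writer: unlink under the mutex, then wait for readers before freeing,
 * just as .release() does with synchronize_rcu() in the patch.
 */
static void example_remove(struct example_entry *e)
{
	mutex_lock(&example_mutex);
	hash_del_rcu(&e->node);
	mutex_unlock(&example_mutex);

	synchronize_rcu();	/* no RCU reader can still see 'e' */
	kfree(e);
}

The spin_lock_bh() variant was only needed while data path (softirq
context) code could contend for the lock; once those users were
converted to RCU by the patch this one depends on, every remaining lock
holder runs in process context and may sleep, so a plain mutex is
sufficient.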