From patchwork Fri Jan 30 13:46:20 2015
X-Patchwork-Submitter: Baptiste Reynal
X-Patchwork-Id: 5750281
From: Baptiste Reynal
To: kvmarm@lists.cs.columbia.edu, iommu@lists.linux-foundation.org,
	alex.williamson@redhat.com
Cc: will.deacon@arm.com, tech@virtualopensystems.com,
	christoffer.dall@linaro.org, eric.auger@linaro.org,
	kim.phillips@freescale.com, marc.zyngier@arm.com,
	Antonios Motakis, Bjorn Helgaas, Alexander Gordeev,
	Thomas Gleixner, Gavin Shan, Jiang Liu,
	kvm@vger.kernel.org (open list:VFIO DRIVER),
	linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v13 14/18] vfio: add local lock for virqfd instead of depending on VFIO PCI
Date: Fri, 30 Jan 2015 14:46:20 +0100
Message-Id: <1422625584-3741-15-git-send-email-b.reynal@virtualopensystems.com>
X-Mailer: git-send-email 2.2.2
In-Reply-To: <1422625584-3741-1-git-send-email-b.reynal@virtualopensystems.com>
References: <1422625584-3741-1-git-send-email-b.reynal@virtualopensystems.com>

From: Antonios Motakis

The virqfd code needs to keep accesses to any struct virqfd safe, but
this only comes into play when creating or destroying eventfds, so
sharing the same spinlock with the VFIO bus driver is not necessary.

Signed-off-by: Antonios Motakis
---
(A small userspace sketch of the locking pattern this patch adopts is
appended after the diff, for illustration only.)

 drivers/vfio/pci/vfio_pci_intrs.c | 31 ++++++++++++++++---------------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
index a5378d5..b35bc16 100644
--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -44,6 +44,7 @@ struct virqfd {
 };
 
 static struct workqueue_struct *vfio_irqfd_cleanup_wq;
+DEFINE_SPINLOCK(virqfd_lock);
 
 int __init vfio_virqfd_init(void)
 {
@@ -80,21 +81,21 @@ static int virqfd_wakeup(wait_queue_t *wait, unsigned mode, int sync, void *key)
 
 	if (flags & POLLHUP) {
 		unsigned long flags;
-		spin_lock_irqsave(&virqfd->vdev->irqlock, flags);
+		spin_lock_irqsave(&virqfd_lock, flags);
 
 		/*
 		 * The eventfd is closing, if the virqfd has not yet been
 		 * queued for release, as determined by testing whether the
-		 * vdev pointer to it is still valid, queue it now. As
+		 * virqfd pointer to it is still valid, queue it now. As
 		 * with kvm irqfds, we know we won't race against the virqfd
-		 * going away because we hold wqh->lock to get here.
+		 * going away because we hold the lock to get here.
 		 */
 		if (*(virqfd->pvirqfd) == virqfd) {
 			*(virqfd->pvirqfd) = NULL;
 			virqfd_deactivate(virqfd);
 		}
 
-		spin_unlock_irqrestore(&virqfd->vdev->irqlock, flags);
+		spin_unlock_irqrestore(&virqfd_lock, flags);
 	}
 
 	return 0;
@@ -170,16 +171,16 @@ int vfio_virqfd_enable(struct vfio_pci_device *vdev,
 	 * we update the pointer to the virqfd under lock to avoid
 	 * pushing multiple jobs to release the same virqfd.
 	 */
-	spin_lock_irq(&vdev->irqlock);
+	spin_lock_irq(&virqfd_lock);
 
 	if (*pvirqfd) {
-		spin_unlock_irq(&vdev->irqlock);
+		spin_unlock_irq(&virqfd_lock);
 		ret = -EBUSY;
 		goto err_busy;
 	}
 	*pvirqfd = virqfd;
 
-	spin_unlock_irq(&vdev->irqlock);
+	spin_unlock_irq(&virqfd_lock);
 
 	/*
 	 * Install our own custom wake-up handling so we are notified via
@@ -217,18 +218,18 @@ err_fd:
 }
 EXPORT_SYMBOL_GPL(vfio_virqfd_enable);
 
-void vfio_virqfd_disable(struct vfio_pci_device *vdev, struct virqfd **pvirqfd)
+void vfio_virqfd_disable(struct virqfd **pvirqfd)
 {
 	unsigned long flags;
 
-	spin_lock_irqsave(&vdev->irqlock, flags);
+	spin_lock_irqsave(&virqfd_lock, flags);
 
 	if (*pvirqfd) {
 		virqfd_deactivate(*pvirqfd);
 		*pvirqfd = NULL;
 	}
 
-	spin_unlock_irqrestore(&vdev->irqlock, flags);
+	spin_unlock_irqrestore(&virqfd_lock, flags);
 
 	/*
 	 * Block until we know all outstanding shutdown jobs have completed.
@@ -441,8 +442,8 @@ static int vfio_intx_set_signal(struct vfio_pci_device *vdev, int fd)
 static void vfio_intx_disable(struct vfio_pci_device *vdev)
 {
 	vfio_intx_set_signal(vdev, -1);
-	vfio_virqfd_disable(vdev, &vdev->ctx[0].unmask);
-	vfio_virqfd_disable(vdev, &vdev->ctx[0].mask);
+	vfio_virqfd_disable(&vdev->ctx[0].unmask);
+	vfio_virqfd_disable(&vdev->ctx[0].mask);
 	vdev->irq_type = VFIO_PCI_NUM_IRQS;
 	vdev->num_ctx = 0;
 	kfree(vdev->ctx);
@@ -606,8 +607,8 @@ static void vfio_msi_disable(struct vfio_pci_device *vdev, bool msix)
 	vfio_msi_set_block(vdev, 0, vdev->num_ctx, NULL, msix);
 
 	for (i = 0; i < vdev->num_ctx; i++) {
-		vfio_virqfd_disable(vdev, &vdev->ctx[i].unmask);
-		vfio_virqfd_disable(vdev, &vdev->ctx[i].mask);
+		vfio_virqfd_disable(&vdev->ctx[i].unmask);
+		vfio_virqfd_disable(&vdev->ctx[i].mask);
 	}
 
 	if (msix) {
@@ -645,7 +646,7 @@ static int vfio_pci_set_intx_unmask(struct vfio_pci_device *vdev,
 						  vfio_send_intx_eventfd, NULL,
 						  &vdev->ctx[0].unmask, fd);
 
-		vfio_virqfd_disable(vdev, &vdev->ctx[0].unmask);
+		vfio_virqfd_disable(&vdev->ctx[0].unmask);
 	}
 
 	return 0;
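
For readers skimming the diff, the change boils down to guarding the publish
and teardown of the *pvirqfd slot with a lock owned by the virqfd code itself,
so callers such as vfio_virqfd_disable() no longer need the per-device
irqlock (and lose their vdev argument). Below is a minimal userspace sketch
of that pattern, not kernel code: it uses pthreads instead of a kernel
spinlock, and the names virqfd_like, local_lock, slot_enable and slot_disable
are made up for illustration.

/* Userspace analogy of the patch's locking pattern: a file-local lock
 * serializes publishing and clearing a pointer slot, so callers do not
 * have to pass in the device structure that used to own the lock. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct virqfd_like {
	int dummy;
};

/* File-local lock, analogous to the new virqfd_lock in the patch. */
static pthread_mutex_t local_lock = PTHREAD_MUTEX_INITIALIZER;

/* Publish a new object into the slot; fail if one is already active,
 * mirroring the -EBUSY path in vfio_virqfd_enable(). */
static int slot_enable(struct virqfd_like **slot)
{
	struct virqfd_like *new = malloc(sizeof(*new));

	if (!new)
		return -1;

	pthread_mutex_lock(&local_lock);
	if (*slot) {
		pthread_mutex_unlock(&local_lock);
		free(new);
		return -1;		/* slot already in use */
	}
	*slot = new;
	pthread_mutex_unlock(&local_lock);
	return 0;
}

/* Tear down whatever is in the slot; note that no device argument is
 * needed, only the slot and the local lock. */
static void slot_disable(struct virqfd_like **slot)
{
	pthread_mutex_lock(&local_lock);
	if (*slot) {
		free(*slot);
		*slot = NULL;
	}
	pthread_mutex_unlock(&local_lock);
}

int main(void)
{
	struct virqfd_like *slot = NULL;

	printf("enable: %d\n", slot_enable(&slot));		/* 0  */
	printf("enable again: %d\n", slot_enable(&slot));	/* -1 */
	slot_disable(&slot);
	printf("enable after disable: %d\n", slot_enable(&slot)); /* 0 */
	slot_disable(&slot);
	return 0;
}

As the patch subject says, dropping the vdev parameter is what removes the
virqfd code's dependence on the VFIO PCI device structure.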