From patchwork Mon Mar 2 16:59:58 2015
X-Patchwork-Submitter: Baptiste Reynal
X-Patchwork-Id: 5915261
From: Baptiste Reynal
To: iommu@lists.linux-foundation.org, kvmarm@lists.cs.columbia.edu
Cc: tech@virtualopensystems.com, Antonios Motakis, Baptiste Reynal,
    Alex Williamson, Gavin Shan, Thomas Gleixner, Jiang Liu,
    linux-kernel@vger.kernel.org (open list),
    kvm@vger.kernel.org (open list:VFIO DRIVER)
Subject: [PATCH v14 18/20] vfio: move eventfd support code for VFIO_PCI to a separate file
Date: Mon, 2 Mar 2015 17:59:58 +0100
Message-Id: <1425315600-29761-19-git-send-email-b.reynal@virtualopensystems.com>
X-Mailer: git-send-email 2.3.1
In-Reply-To: <1425315600-29761-1-git-send-email-b.reynal@virtualopensystems.com>
References: <1425315600-29761-1-git-send-email-b.reynal@virtualopensystems.com>
X-Mailing-List: kvm@vger.kernel.org

From: Antonios Motakis

The virqfd functionality that is used by VFIO_PCI to implement interrupt
masking and unmasking via an eventfd is generic enough and can be reused
by another driver. Move it to a separate file in order to allow the code
to be shared.

Signed-off-by: Antonios Motakis
Signed-off-by: Baptiste Reynal
---
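For context, a minimal usage sketch (not part of this patch) of the
interface exported here, as another driver might consume it once
virqfd.c is shared. The foo_* names below are hypothetical. The handler
callback runs from the eventfd wake-up path (atomic context); when it
returns non-zero, or when no handler is given, and a thread callback was
registered, the thread callback is deferred to a work item. This assumes
vfio_virqfd_init() has already been called by the module that links in
virqfd.o, as vfio-pci does after this patch.

/* Hypothetical consumer of the shared virqfd interface. */
#include <linux/vfio.h>

struct foo_irq {
	struct virqfd *unmask_virqfd;	/* managed by vfio_virqfd_*() */
};

/* Runs from the eventfd wakeup (atomic context). */
static int foo_unmask_handler(void *opaque, void *data)
{
	struct foo_irq *foo = opaque;

	/* fast-path unmask of the device interrupt would go here */
	return foo != NULL;	/* non-zero also kicks the thread callback */
}

static int foo_hook_unmask_eventfd(struct foo_irq *foo, int eventfd)
{
	/* no thread callback in this sketch, so pass NULL for thread/data */
	return vfio_virqfd_enable(foo, foo_unmask_handler, NULL, NULL,
				  &foo->unmask_virqfd, eventfd);
}

static void foo_unhook_unmask_eventfd(struct foo_irq *foo)
{
	/* blocks until any queued shutdown work for this virqfd is done */
	vfio_virqfd_disable(&foo->unmask_virqfd);
}

vfio_virqfd_enable() returns -EBUSY if *pvirqfd already points at an
active virqfd, so replacing a trigger requires vfio_virqfd_disable()
first; disable flushes the cleanup workqueue and therefore may sleep.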
 drivers/vfio/pci/Makefile           |   3 +-
 drivers/vfio/pci/vfio_pci_intrs.c   | 215 ------------------------------------
 drivers/vfio/pci/vfio_pci_private.h |   3 -
 drivers/vfio/virqfd.c               | 213 +++++++++++++++++++++++++++++++++++
 include/linux/vfio.h                |  27 +++++
 5 files changed, 242 insertions(+), 219 deletions(-)
 create mode 100644 drivers/vfio/virqfd.c

diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile
index 1310792..c7c8644 100644
--- a/drivers/vfio/pci/Makefile
+++ b/drivers/vfio/pci/Makefile
@@ -1,4 +1,5 @@
-vfio-pci-y := vfio_pci.o vfio_pci_intrs.o vfio_pci_rdwr.o vfio_pci_config.o
+vfio-pci-y := vfio_pci.o vfio_pci_intrs.o vfio_pci_rdwr.o vfio_pci_config.o \
+              ../virqfd.o
 
 obj-$(CONFIG_VFIO_PCI) += vfio-pci.o
diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
index 1a16da3..da6616e 100644
--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -19,228 +19,13 @@
 #include
 #include
 #include
-#include
 #include
 #include
-#include
 #include
 
 #include "vfio_pci_private.h"
 
 /*
- * IRQfd - generic
- */
-struct virqfd {
-        void *opaque;
-        struct eventfd_ctx *eventfd;
-        int (*handler)(void *, void *);
-        void (*thread)(void *, void *);
-        void *data;
-        struct work_struct inject;
-        wait_queue_t wait;
-        poll_table pt;
-        struct work_struct shutdown;
-        struct virqfd **pvirqfd;
-};
-
-static struct workqueue_struct *vfio_irqfd_cleanup_wq;
-DEFINE_SPINLOCK(virqfd_lock);
-
-int __init vfio_virqfd_init(void)
-{
-        vfio_irqfd_cleanup_wq =
-                create_singlethread_workqueue("vfio-irqfd-cleanup");
-        if (!vfio_irqfd_cleanup_wq)
-                return -ENOMEM;
-
-        return 0;
-}
-
-void vfio_virqfd_exit(void)
-{
-        destroy_workqueue(vfio_irqfd_cleanup_wq);
-}
-
-static void virqfd_deactivate(struct virqfd *virqfd)
-{
-        queue_work(vfio_irqfd_cleanup_wq, &virqfd->shutdown);
-}
-
-static int virqfd_wakeup(wait_queue_t *wait, unsigned mode, int sync, void *key)
-{
-        struct virqfd *virqfd = container_of(wait, struct virqfd, wait);
-        unsigned long flags = (unsigned long)key;
-
-        if (flags & POLLIN) {
-                /* An event has been signaled, call function */
-                if ((!virqfd->handler ||
-                     virqfd->handler(virqfd->opaque, virqfd->data)) &&
-                    virqfd->thread)
-                        schedule_work(&virqfd->inject);
-        }
-
-        if (flags & POLLHUP) {
-                unsigned long flags;
-                spin_lock_irqsave(&virqfd_lock, flags);
-
-                /*
-                 * The eventfd is closing, if the virqfd has not yet been
-                 * queued for release, as determined by testing whether the
-                 * virqfd pointer to it is still valid, queue it now. As
-                 * with kvm irqfds, we know we won't race against the virqfd
-                 * going away because we hold the lock to get here.
-                 */
-                if (*(virqfd->pvirqfd) == virqfd) {
-                        *(virqfd->pvirqfd) = NULL;
-                        virqfd_deactivate(virqfd);
-                }
-
-                spin_unlock_irqrestore(&virqfd_lock, flags);
-        }
-
-        return 0;
-}
-
-static void virqfd_ptable_queue_proc(struct file *file,
-                                     wait_queue_head_t *wqh, poll_table *pt)
-{
-        struct virqfd *virqfd = container_of(pt, struct virqfd, pt);
-        add_wait_queue(wqh, &virqfd->wait);
-}
-
-static void virqfd_shutdown(struct work_struct *work)
-{
-        struct virqfd *virqfd = container_of(work, struct virqfd, shutdown);
-        u64 cnt;
-
-        eventfd_ctx_remove_wait_queue(virqfd->eventfd, &virqfd->wait, &cnt);
-        flush_work(&virqfd->inject);
-        eventfd_ctx_put(virqfd->eventfd);
-
-        kfree(virqfd);
-}
-
-static void virqfd_inject(struct work_struct *work)
-{
-        struct virqfd *virqfd = container_of(work, struct virqfd, inject);
-        if (virqfd->thread)
-                virqfd->thread(virqfd->opaque, virqfd->data);
-}
-
-int vfio_virqfd_enable(void *opaque,
-                       int (*handler)(void *, void *),
-                       void (*thread)(void *, void *),
-                       void *data, struct virqfd **pvirqfd, int fd)
-{
-        struct fd irqfd;
-        struct eventfd_ctx *ctx;
-        struct virqfd *virqfd;
-        int ret = 0;
-        unsigned int events;
-
-        virqfd = kzalloc(sizeof(*virqfd), GFP_KERNEL);
-        if (!virqfd)
-                return -ENOMEM;
-
-        virqfd->pvirqfd = pvirqfd;
-        virqfd->opaque = opaque;
-        virqfd->handler = handler;
-        virqfd->thread = thread;
-        virqfd->data = data;
-
-        INIT_WORK(&virqfd->shutdown, virqfd_shutdown);
-        INIT_WORK(&virqfd->inject, virqfd_inject);
-
-        irqfd = fdget(fd);
-        if (!irqfd.file) {
-                ret = -EBADF;
-                goto err_fd;
-        }
-
-        ctx = eventfd_ctx_fileget(irqfd.file);
-        if (IS_ERR(ctx)) {
-                ret = PTR_ERR(ctx);
-                goto err_ctx;
-        }
-
-        virqfd->eventfd = ctx;
-
-        /*
-         * virqfds can be released by closing the eventfd or directly
-         * through ioctl. These are both done through a workqueue, so
-         * we update the pointer to the virqfd under lock to avoid
-         * pushing multiple jobs to release the same virqfd.
-         */
-        spin_lock_irq(&virqfd_lock);
-
-        if (*pvirqfd) {
-                spin_unlock_irq(&virqfd_lock);
-                ret = -EBUSY;
-                goto err_busy;
-        }
-        *pvirqfd = virqfd;
-
-        spin_unlock_irq(&virqfd_lock);
-
-        /*
-         * Install our own custom wake-up handling so we are notified via
-         * a callback whenever someone signals the underlying eventfd.
-         */
-        init_waitqueue_func_entry(&virqfd->wait, virqfd_wakeup);
-        init_poll_funcptr(&virqfd->pt, virqfd_ptable_queue_proc);
-
-        events = irqfd.file->f_op->poll(irqfd.file, &virqfd->pt);
-
-        /*
-         * Check if there was an event already pending on the eventfd
-         * before we registered and trigger it as if we didn't miss it.
-         */
-        if (events & POLLIN) {
-                if ((!handler || handler(opaque, data)) && thread)
-                        schedule_work(&virqfd->inject);
-        }
-
-        /*
-         * Do not drop the file until the irqfd is fully initialized,
-         * otherwise we might race against the POLLHUP.
-         */
-        fdput(irqfd);
-
-        return 0;
-err_busy:
-        eventfd_ctx_put(ctx);
-err_ctx:
-        fdput(irqfd);
-err_fd:
-        kfree(virqfd);
-
-        return ret;
-}
-EXPORT_SYMBOL_GPL(vfio_virqfd_enable);
-
-void vfio_virqfd_disable(struct virqfd **pvirqfd)
-{
-        unsigned long flags;
-
-        spin_lock_irqsave(&virqfd_lock, flags);
-
-        if (*pvirqfd) {
-                virqfd_deactivate(*pvirqfd);
-                *pvirqfd = NULL;
-        }
-
-        spin_unlock_irqrestore(&virqfd_lock, flags);
-
-        /*
-         * Block until we know all outstanding shutdown jobs have completed.
-         * Even if we don't queue the job, flush the wq to be sure it's
-         * been released.
-         */
-        flush_workqueue(vfio_irqfd_cleanup_wq);
-}
-EXPORT_SYMBOL_GPL(vfio_virqfd_disable);
-
-/*
  * INTx
  */
 static void vfio_send_intx_eventfd(void *opaque, void *unused)
diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
index 0253965..ae0e1b4 100644
--- a/drivers/vfio/pci/vfio_pci_private.h
+++ b/drivers/vfio/pci/vfio_pci_private.h
@@ -87,9 +87,6 @@ extern ssize_t vfio_pci_vga_rw(struct vfio_pci_device *vdev, char __user *buf,
 extern int vfio_pci_init_perm_bits(void);
 extern void vfio_pci_uninit_perm_bits(void);
 
-extern int vfio_virqfd_init(void);
-extern void vfio_virqfd_exit(void);
-
 extern int vfio_config_init(struct vfio_pci_device *vdev);
 extern void vfio_config_free(struct vfio_pci_device *vdev);
 #endif /* VFIO_PCI_PRIVATE_H */
diff --git a/drivers/vfio/virqfd.c b/drivers/vfio/virqfd.c
new file mode 100644
index 0000000..5967899
--- /dev/null
+++ b/drivers/vfio/virqfd.c
@@ -0,0 +1,213 @@
+/*
+ * VFIO generic eventfd code for IRQFD support.
+ * Derived from drivers/vfio/pci/vfio_pci_intrs.c
+ *
+ * Copyright (C) 2012 Red Hat, Inc. All rights reserved.
+ *     Author: Alex Williamson
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include
+#include
+#include
+#include
+
+static struct workqueue_struct *vfio_irqfd_cleanup_wq;
+DEFINE_SPINLOCK(virqfd_lock);
+
+int __init vfio_virqfd_init(void)
+{
+        vfio_irqfd_cleanup_wq =
+                create_singlethread_workqueue("vfio-irqfd-cleanup");
+        if (!vfio_irqfd_cleanup_wq)
+                return -ENOMEM;
+
+        return 0;
+}
+
+void vfio_virqfd_exit(void)
+{
+        destroy_workqueue(vfio_irqfd_cleanup_wq);
+}
+
+static void virqfd_deactivate(struct virqfd *virqfd)
+{
+        queue_work(vfio_irqfd_cleanup_wq, &virqfd->shutdown);
+}
+
+static int virqfd_wakeup(wait_queue_t *wait, unsigned mode, int sync, void *key)
+{
+        struct virqfd *virqfd = container_of(wait, struct virqfd, wait);
+        unsigned long flags = (unsigned long)key;
+
+        if (flags & POLLIN) {
+                /* An event has been signaled, call function */
+                if ((!virqfd->handler ||
+                     virqfd->handler(virqfd->opaque, virqfd->data)) &&
+                    virqfd->thread)
+                        schedule_work(&virqfd->inject);
+        }
+
+        if (flags & POLLHUP) {
+                unsigned long flags;
+                spin_lock_irqsave(&virqfd_lock, flags);
+
+                /*
+                 * The eventfd is closing, if the virqfd has not yet been
+                 * queued for release, as determined by testing whether the
+                 * virqfd pointer to it is still valid, queue it now. As
+                 * with kvm irqfds, we know we won't race against the virqfd
+                 * going away because we hold the lock to get here.
+                 */
+                if (*(virqfd->pvirqfd) == virqfd) {
+                        *(virqfd->pvirqfd) = NULL;
+                        virqfd_deactivate(virqfd);
+                }
+
+                spin_unlock_irqrestore(&virqfd_lock, flags);
+        }
+
+        return 0;
+}
+
+static void virqfd_ptable_queue_proc(struct file *file,
+                                     wait_queue_head_t *wqh, poll_table *pt)
+{
+        struct virqfd *virqfd = container_of(pt, struct virqfd, pt);
+        add_wait_queue(wqh, &virqfd->wait);
+}
+
+static void virqfd_shutdown(struct work_struct *work)
+{
+        struct virqfd *virqfd = container_of(work, struct virqfd, shutdown);
+        u64 cnt;
+
+        eventfd_ctx_remove_wait_queue(virqfd->eventfd, &virqfd->wait, &cnt);
+        flush_work(&virqfd->inject);
+        eventfd_ctx_put(virqfd->eventfd);
+
+        kfree(virqfd);
+}
+
+static void virqfd_inject(struct work_struct *work)
+{
+        struct virqfd *virqfd = container_of(work, struct virqfd, inject);
+        if (virqfd->thread)
+                virqfd->thread(virqfd->opaque, virqfd->data);
+}
+
+int vfio_virqfd_enable(void *opaque,
+                       int (*handler)(void *, void *),
+                       void (*thread)(void *, void *),
+                       void *data, struct virqfd **pvirqfd, int fd)
+{
+        struct fd irqfd;
+        struct eventfd_ctx *ctx;
+        struct virqfd *virqfd;
+        int ret = 0;
+        unsigned int events;
+
+        virqfd = kzalloc(sizeof(*virqfd), GFP_KERNEL);
+        if (!virqfd)
+                return -ENOMEM;
+
+        virqfd->pvirqfd = pvirqfd;
+        virqfd->opaque = opaque;
+        virqfd->handler = handler;
+        virqfd->thread = thread;
+        virqfd->data = data;
+
+        INIT_WORK(&virqfd->shutdown, virqfd_shutdown);
+        INIT_WORK(&virqfd->inject, virqfd_inject);
+
+        irqfd = fdget(fd);
+        if (!irqfd.file) {
+                ret = -EBADF;
+                goto err_fd;
+        }
+
+        ctx = eventfd_ctx_fileget(irqfd.file);
+        if (IS_ERR(ctx)) {
+                ret = PTR_ERR(ctx);
+                goto err_ctx;
+        }
+
+        virqfd->eventfd = ctx;
+
+        /*
+         * virqfds can be released by closing the eventfd or directly
+         * through ioctl. These are both done through a workqueue, so
+         * we update the pointer to the virqfd under lock to avoid
+         * pushing multiple jobs to release the same virqfd.
+         */
+        spin_lock_irq(&virqfd_lock);
+
+        if (*pvirqfd) {
+                spin_unlock_irq(&virqfd_lock);
+                ret = -EBUSY;
+                goto err_busy;
+        }
+        *pvirqfd = virqfd;
+
+        spin_unlock_irq(&virqfd_lock);
+
+        /*
+         * Install our own custom wake-up handling so we are notified via
+         * a callback whenever someone signals the underlying eventfd.
+         */
+        init_waitqueue_func_entry(&virqfd->wait, virqfd_wakeup);
+        init_poll_funcptr(&virqfd->pt, virqfd_ptable_queue_proc);
+
+        events = irqfd.file->f_op->poll(irqfd.file, &virqfd->pt);
+
+        /*
+         * Check if there was an event already pending on the eventfd
+         * before we registered and trigger it as if we didn't miss it.
+         */
+        if (events & POLLIN) {
+                if ((!handler || handler(opaque, data)) && thread)
+                        schedule_work(&virqfd->inject);
+        }
+
+        /*
+         * Do not drop the file until the irqfd is fully initialized,
+         * otherwise we might race against the POLLHUP.
+         */
+        fdput(irqfd);
+
+        return 0;
+err_busy:
+        eventfd_ctx_put(ctx);
+err_ctx:
+        fdput(irqfd);
+err_fd:
+        kfree(virqfd);
+
+        return ret;
+}
+EXPORT_SYMBOL_GPL(vfio_virqfd_enable);
+
+void vfio_virqfd_disable(struct virqfd **pvirqfd)
+{
+        unsigned long flags;
+
+        spin_lock_irqsave(&virqfd_lock, flags);
+
+        if (*pvirqfd) {
+                virqfd_deactivate(*pvirqfd);
+                *pvirqfd = NULL;
+        }
+
+        spin_unlock_irqrestore(&virqfd_lock, flags);
+
+        /*
+         * Block until we know all outstanding shutdown jobs have completed.
+         * Even if we don't queue the job, flush the wq to be sure it's
+         * been released.
+         */
+        flush_workqueue(vfio_irqfd_cleanup_wq);
+}
+EXPORT_SYMBOL_GPL(vfio_virqfd_disable);
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 2d67b89..683b514 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -14,6 +14,8 @@
 #include
 #include
+#include
+#include
 #include
 
 /**
@@ -123,4 +125,29 @@ static inline long vfio_spapr_iommu_eeh_ioctl(struct iommu_group *group,
         return -ENOTTY;
 }
 #endif /* CONFIG_EEH */
+
+/*
+ * IRQfd - generic
+ */
+struct virqfd {
+        void *opaque;
+        struct eventfd_ctx *eventfd;
+        int (*handler)(void *, void *);
+        void (*thread)(void *, void *);
+        void *data;
+        struct work_struct inject;
+        wait_queue_t wait;
+        poll_table pt;
+        struct work_struct shutdown;
+        struct virqfd **pvirqfd;
+};
+
+extern int vfio_virqfd_init(void);
+extern void vfio_virqfd_exit(void);
+extern int vfio_virqfd_enable(void *opaque,
+                              int (*handler)(void *, void *),
+                              void (*thread)(void *, void *),
+                              void *data, struct virqfd **pvirqfd, int fd);
+extern void vfio_virqfd_disable(struct virqfd **pvirqfd);
+
 #endif /* VFIO_H */