From patchwork Thu Dec 4 21:03:25 2014
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 5440181
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
Subject: [PATCH] drm: Make drm_read() more robust against multithreaded races
Date: Thu, 4 Dec 2014 21:03:25 +0000
Message-Id: <1417727005-26301-1-git-send-email-chris@chris-wilson.co.uk>

The current implementation of drm_read() faces a number of issues:

1. Upon an error, it consumes the event, which may lead to the client
   blocking.
2. Upon an error, it forgets about the events already copied.
3. If it fails to copy a single event with O_NONBLOCK set, it falls
   into an infinite loop of reporting EAGAIN.
4. There is a race between multiple waiters and blocking reads of the
   events list.

Here, we inline drm_dequeue_event() into drm_read() so that we can
take the spinlock around the list walking and event copying, and,
importantly, reorder the error handling to avoid the issues above.

Cc: Takashi Iwai
Signed-off-by: Chris Wilson
Reviewed-by: Takashi Iwai
---
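For reference, a minimal, hypothetical userspace consumer of the kind
drm_read() serves is sketched below. It is not part of the patch: the
device path /dev/dri/card0, the buffer size, and the header location
(<drm/drm.h> here; <libdrm/drm.h> on some setups) are assumptions, and
requesting the events (vblank, page flip) is out of scope. Under the
old code a failed copy destroyed the already-dequeued event, so a loop
like this could sleep in poll() forever waiting for an event that had
silently been consumed; with O_NONBLOCK set, one failed copy turned
every subsequent read() into the endless EAGAIN of issue 3.

/*
 * Hypothetical DRM event consumer, for illustration only.  Assumes
 * the uapi struct drm_event from <drm/drm.h> and a node at
 * /dev/dri/card0; something else must have requested the events.
 */
#include <drm/drm.h>

#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void drain_events(int fd)
{
	char buf[4096];	/* must hold at least one whole event */
	ssize_t n, i;

	for (;;) {
		n = read(fd, buf, sizeof(buf));
		if (n <= 0) {
			/* With the patch, EAGAIN means "queue empty";
			 * no event has been dequeued and lost. */
			if (n < 0 && errno != EAGAIN && errno != EINTR)
				perror("read");
			return;
		}

		/* Events are packed back to back; walk them by length. */
		for (i = 0; i + (ssize_t)sizeof(struct drm_event) <= n; ) {
			struct drm_event ev;

			memcpy(&ev, buf + i, sizeof(ev));
			if (ev.length < sizeof(ev))
				break;	/* malformed; stop walking */
			printf("event type=%u length=%u\n",
			       ev.type, ev.length);
			i += ev.length;
		}
	}
}

int main(void)
{
	struct pollfd pfd;
	int fd = open("/dev/dri/card0", O_RDWR | O_NONBLOCK);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	pfd.fd = fd;
	pfd.events = POLLIN;
	while (poll(&pfd, 1, -1) > 0)
		drain_events(fd);

	close(fd);
	return 0;
}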
 drivers/gpu/drm/drm_fops.c | 90 ++++++++++++++++++++++------------------------
 1 file changed, 43 insertions(+), 47 deletions(-)

diff --git a/drivers/gpu/drm/drm_fops.c b/drivers/gpu/drm/drm_fops.c
index 91e1105f2800..076dd606b580 100644
--- a/drivers/gpu/drm/drm_fops.c
+++ b/drivers/gpu/drm/drm_fops.c
@@ -478,63 +478,59 @@ int drm_release(struct inode *inode, struct file *filp)
 }
 EXPORT_SYMBOL(drm_release);
 
-static bool
-drm_dequeue_event(struct drm_file *file_priv,
-		  size_t total, size_t max, struct drm_pending_event **out)
+ssize_t drm_read(struct file *filp, char __user *buffer,
+		 size_t count, loff_t *offset)
 {
+	struct drm_file *file_priv = filp->private_data;
 	struct drm_device *dev = file_priv->minor->dev;
-	struct drm_pending_event *e;
-	unsigned long flags;
-	bool ret = false;
-
-	spin_lock_irqsave(&dev->event_lock, flags);
+	ssize_t ret = 0;
 
-	*out = NULL;
-	if (list_empty(&file_priv->event_list))
-		goto out;
-	e = list_first_entry(&file_priv->event_list,
-			     struct drm_pending_event, link);
-	if (e->event->length + total > max)
-		goto out;
+	if (!access_ok(VERIFY_WRITE, buffer, count))
+		return -EFAULT;
 
-	file_priv->event_space += e->event->length;
-	list_del(&e->link);
-	*out = e;
-	ret = true;
+	spin_lock_irq(&dev->event_lock);
+	for (;;) {
+		if (list_empty(&file_priv->event_list)) {
+			if (ret)
+				break;
 
-out:
-	spin_unlock_irqrestore(&dev->event_lock, flags);
-	return ret;
-}
-
-ssize_t drm_read(struct file *filp, char __user *buffer,
-		 size_t count, loff_t *offset)
-{
-	struct drm_file *file_priv = filp->private_data;
-	struct drm_pending_event *e;
-	size_t total;
-	ssize_t ret;
+			if (filp->f_flags & O_NONBLOCK) {
+				ret = -EAGAIN;
+				break;
+			}
 
-	if ((filp->f_flags & O_NONBLOCK) == 0) {
-		ret = wait_event_interruptible(file_priv->event_wait,
-					       !list_empty(&file_priv->event_list));
-		if (ret < 0)
-			return ret;
-	}
+			spin_unlock_irq(&dev->event_lock);
+			ret = wait_event_interruptible(file_priv->event_wait,
+						       !list_empty(&file_priv->event_list));
+			spin_lock_irq(&dev->event_lock);
+			if (ret < 0)
+				break;
+
+			ret = 0;
+		} else {
+			struct drm_pending_event *e;
+
+			e = list_first_entry(&file_priv->event_list,
+					     struct drm_pending_event, link);
+			if (e->event->length + ret > count)
+				break;
+
+			if (__copy_to_user_inatomic(buffer + ret,
+						    e->event, e->event->length)) {
+				if (ret == 0)
+					ret = -EFAULT;
+				break;
+			}
 
-	total = 0;
-	while (drm_dequeue_event(file_priv, total, count, &e)) {
-		if (copy_to_user(buffer + total,
-				 e->event, e->event->length)) {
-			total = -EFAULT;
-			break;
+			file_priv->event_space += e->event->length;
+			ret += e->event->length;
+			list_del(&e->link);
+			e->destroy(e);
 		}
-
-		total += e->event->length;
-		e->destroy(e);
 	}
+	spin_unlock_irq(&dev->event_lock);
 
-	return total ?: -EAGAIN;
+	return ret;
 }
 EXPORT_SYMBOL(drm_read);
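A note on the locking choices visible above: drm_read() runs in
process context with interrupts enabled, so the unconditional
spin_lock_irq()/spin_unlock_irq() pair can replace the old helper's
irqsave/irqrestore dance. Because copy_to_user() may fault and sleep,
which is forbidden while holding a spinlock, the copy is done with
__copy_to_user_inatomic(), which fails fast instead of servicing the
fault; that in turn is why the access_ok(VERIFY_WRITE, ...) check is
hoisted to the top of the function. On a failed copy the event stays
on the list (list_del() happens only after a successful copy), so
nothing is lost: the read returns the bytes copied so far, and
-EFAULT only if nothing was copied at all.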