From patchwork Fri Aug 21 08:20:40 2015
X-Patchwork-Submitter: Thomas Hellstrom
X-Patchwork-Id: 7049731
From: Thomas Hellstrom
To: dri-devel@lists.freedesktop.org
Cc: pv-drivers@vmware.com, linux-graphics-maintainer@vmware.com
Subject: [PATCH 1/2] drm/vmwgfx: Fix a circular locking dependency in the fbdev code
Date: Fri, 21 Aug 2015 01:20:40 -0700
Message-ID: <1440145241-3367-1-git-send-email-thellstrom@vmware.com>

When a user-space process writes directly to the fbdev framebuffer, we
hit a circular locking dependency. Fix this by introducing a local
delayed work callback so that the defio lock can be released before
calling into the modesetting code for a dirty update.
Signed-off-by: Thomas Hellstrom
Reviewed-by: Sinclair Yeh
---
 drivers/gpu/drm/vmwgfx/vmwgfx_fb.c | 29 +++++++++++++++++------------
 1 file changed, 17 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
index 042c5b4..3b1faf7 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
@@ -68,8 +68,7 @@ struct vmw_fb_par {
 	struct drm_crtc *crtc;
 	struct drm_connector *con;
-
-	bool local_mode;
+	struct delayed_work local_work;
 };
 
 static int vmw_fb_setcolreg(unsigned regno, unsigned red, unsigned green,
@@ -167,8 +166,10 @@ static int vmw_fb_blank(int blank, struct fb_info *info)
  * Dirty code
  */
 
-static void vmw_fb_dirty_flush(struct vmw_fb_par *par)
+static void vmw_fb_dirty_flush(struct work_struct *work)
 {
+	struct vmw_fb_par *par = container_of(work, struct vmw_fb_par,
+					      local_work.work);
 	struct vmw_private *vmw_priv = par->vmw_priv;
 	struct fb_info *info = vmw_priv->fb_info;
 	unsigned long irq_flags;
@@ -248,7 +249,6 @@ static void vmw_fb_dirty_mark(struct vmw_fb_par *par,
 			      unsigned x1, unsigned y1,
 			      unsigned width, unsigned height)
 {
-	struct fb_info *info = par->vmw_priv->fb_info;
 	unsigned long flags;
 	unsigned x2 = x1 + width;
 	unsigned y2 = y1 + height;
@@ -262,7 +262,8 @@ static void vmw_fb_dirty_mark(struct vmw_fb_par *par,
 		/* if we are active start the dirty work
 		 * we share the work with the defio system */
 		if (par->dirty.active)
-			schedule_delayed_work(&info->deferred_work, VMW_DIRTY_DELAY);
+			schedule_delayed_work(&par->local_work,
+					      VMW_DIRTY_DELAY);
 	} else {
 		if (x1 < par->dirty.x1)
 			par->dirty.x1 = x1;
@@ -326,9 +327,14 @@ static void vmw_deferred_io(struct fb_info *info,
 		par->dirty.x2 = info->var.xres;
 		par->dirty.y2 = y2;
 		spin_unlock_irqrestore(&par->dirty.lock, flags);
-	}
 
-	vmw_fb_dirty_flush(par);
+		/*
+		 * Since we've already waited on this work once, try to
+		 * execute asap.
+		 */
+		cancel_delayed_work(&par->local_work);
+		schedule_delayed_work(&par->local_work, 0);
+	}
 };
 
 static struct fb_deferred_io vmw_defio = {
@@ -601,11 +607,7 @@ static int vmw_fb_set_par(struct fb_info *info)
 	/* If there already was stuff dirty we wont
 	 * schedule a new work, so lets do it now */
 
-#if (defined(VMWGFX_STANDALONE) && defined(VMWGFX_FB_DEFERRED))
-	schedule_delayed_work(&par->def_par.deferred_work, 0);
-#else
-	schedule_delayed_work(&info->deferred_work, 0);
-#endif
+	schedule_delayed_work(&par->local_work, 0);
 
 out_unlock:
 	if (old_mode)
@@ -662,6 +664,7 @@ int vmw_fb_init(struct vmw_private *vmw_priv)
 	vmw_priv->fb_info = info;
 	par = info->par;
 	memset(par, 0, sizeof(*par));
+	INIT_DELAYED_WORK(&par->local_work, &vmw_fb_dirty_flush);
 	par->vmw_priv = vmw_priv;
 	par->vmalloc = NULL;
 	par->max_width = fb_width;
@@ -784,6 +787,7 @@ int vmw_fb_close(struct vmw_private *vmw_priv)
 
 	/* ??? order */
 	fb_deferred_io_cleanup(info);
+	cancel_delayed_work_sync(&par->local_work);
 	unregister_framebuffer(info);
 
 	(void) vmw_fb_kms_detach(par, true, true);
@@ -811,6 +815,7 @@ int vmw_fb_off(struct vmw_private *vmw_priv)
 	spin_unlock_irqrestore(&par->dirty.lock, flags);
 
 	flush_delayed_work(&info->deferred_work);
+	flush_delayed_work(&par->local_work);
 
 	mutex_lock(&par->bo_mutex);
 	(void) vmw_fb_kms_detach(par, true, false);