From patchwork Wed Oct 10 15:04:59 2012
From: ville.syrjala@linux.intel.com
To: dri-devel@lists.freedesktop.org
Subject: [RFC PATCH] drm/i915: Add atomic page flip support
Date: Wed, 10 Oct 2012 18:04:59 +0300
Message-Id: <1349881499-5711-4-git-send-email-ville.syrjala@linux.intel.com>
In-Reply-To: <1349881499-5711-1-git-send-email-ville.syrjala@linux.intel.com>
References: <1349881499-5711-1-git-send-email-ville.syrjala@linux.intel.com>
List-Id: Direct Rendering Infrastructure - Development
From: Ville Syrjälä <ville.syrjala@linux.intel.com>

Utilize drm_flip to implement "atomic page flip". When multiple planes
on one pipe are involved, the plane operations must be synchronized in
software, since the hardware provides no means to do so. drm_flip is
used to make that happen, and to track the progress of the flip
operations.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/i915_dma.c      |    5 +
 drivers/gpu/drm/i915/i915_drv.h      |    4 +
 drivers/gpu/drm/i915/i915_irq.c      |   18 +-
 drivers/gpu/drm/i915/intel_atomic.c  |  813 +++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/i915/intel_display.c |    2 +
 drivers/gpu/drm/i915/intel_drv.h     |    6 +
 6 files changed, 832 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
index e958e54..79ad32d 100644
--- a/drivers/gpu/drm/i915/i915_dma.c
+++ b/drivers/gpu/drm/i915/i915_dma.c
@@ -1762,6 +1762,8 @@ int i915_driver_open(struct drm_device *dev, struct drm_file *file)

     idr_init(&file_priv->context_idr);

+    INIT_LIST_HEAD(&file_priv->pending_flips);
+
     return 0;
 }

@@ -1792,10 +1794,13 @@ void i915_driver_lastclose(struct drm_device * dev)
     i915_dma_cleanup(dev);
 }

+void intel_atomic_free_events(struct drm_device *dev, struct drm_file *file);
+
 void i915_driver_preclose(struct drm_device * dev, struct drm_file *file_priv)
 {
     i915_gem_context_close(dev, file_priv);
     i915_gem_release(dev, file_priv);
+    intel_atomic_free_events(dev, file_priv);
 }

 void i915_driver_postclose(struct drm_device *dev, struct drm_file *file)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 57e4894..80645df 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -36,6 +36,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -845,6 +846,8 @@ typedef struct drm_i915_private {
     struct work_struct parity_error_work;
     bool hw_contexts_disabled;
     uint32_t hw_context_size;
+
+    struct drm_flip_driver flip_driver;
 } drm_i915_private_t;

 /* Iterate over initialised rings */
@@ -1055,6 +1058,7 @@ struct drm_i915_file_private {
         struct list_head request_list;
     } mm;
     struct idr context_idr;
+    struct list_head pending_flips;
 };

 #define INTEL_INFO(dev) (((struct drm_i915_private *) (dev)->dev_private)->info)
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 23f2ea0..f816dab 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -37,6 +37,8 @@
 #include "i915_trace.h"
 #include "intel_drv.h"

+void intel_atomic_handle_vblank(struct drm_device *dev, int pipe);
+
 /* For display hotplug interrupt */
 static void
 ironlake_enable_display_irq(drm_i915_private_t *dev_priv, u32 mask)
@@ -547,8 +549,10 @@ static irqreturn_t valleyview_irq_handler(DRM_IRQ_ARGS)
         spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);

         for_each_pipe(pipe) {
-            if (pipe_stats[pipe] & PIPE_VBLANK_INTERRUPT_STATUS)
+            if (pipe_stats[pipe] & PIPE_VBLANK_INTERRUPT_STATUS) {
                 drm_handle_vblank(dev, pipe);
+                intel_atomic_handle_vblank(dev, pipe);
+            }

             if (pipe_stats[pipe] & PLANE_FLIPDONE_INT_STATUS_VLV) {
                 intel_prepare_page_flip(dev, pipe);
@@ -685,8 +689,10 @@ static irqreturn_t ivybridge_irq_handler(DRM_IRQ_ARGS)
             intel_prepare_page_flip(dev, i);
             intel_finish_page_flip_plane(dev, i);
         }
-        if (de_iir & (DE_PIPEA_VBLANK_IVB << (5 * i)))
+        if (de_iir & (DE_PIPEA_VBLANK_IVB << (5 * i))) {
             drm_handle_vblank(dev, i);
+            intel_atomic_handle_vblank(dev, i);
+        }
     }

     /* check event from PCH */
@@ -778,11 +784,15 @@ static irqreturn_t ironlake_irq_handler(DRM_IRQ_ARGS)
         intel_finish_page_flip_plane(dev, 1);
     }

-    if (de_iir & DE_PIPEA_VBLANK)
+    if (de_iir & DE_PIPEA_VBLANK) {
         drm_handle_vblank(dev, 0);
+        intel_atomic_handle_vblank(dev, 0);
+    }

-    if (de_iir & DE_PIPEB_VBLANK)
+    if (de_iir & DE_PIPEB_VBLANK) {
         drm_handle_vblank(dev, 1);
+        intel_atomic_handle_vblank(dev, 1);
+    }

     /* check event from PCH */
     if (de_iir & DE_PCH_EVENT) {
diff --git a/drivers/gpu/drm/i915/intel_atomic.c b/drivers/gpu/drm/i915/intel_atomic.c
index 363018f..9fa95d3 100644
--- a/drivers/gpu/drm/i915/intel_atomic.c
+++ b/drivers/gpu/drm/i915/intel_atomic.c
@@ -3,6 +3,7 @@
 #include
 #include
+#include

 #include "intel_drv.h"
@@ -24,12 +25,29 @@ static struct drm_property *prop_cursor_y;
 static struct drm_property *prop_cursor_w;
 static struct drm_property *prop_cursor_h;

+struct intel_flip {
+    struct drm_flip base;
+    u32 vbl_count;
+    bool vblank_ref;
+    bool has_cursor;
+    struct drm_crtc *crtc;
+    struct drm_plane *plane;
+    struct drm_i915_gem_object *old_bo;
+    struct drm_i915_gem_object *old_cursor_bo;
+    struct drm_pending_atomic_event *event;
+    uint32_t old_fb_id;
+    struct list_head pending_head;
+};
+
 struct intel_plane_state {
     struct drm_plane *plane;
     struct drm_framebuffer *old_fb;
     struct intel_plane_coords coords;
     bool dirty;
     bool pinned;
+    bool need_event;
+    struct drm_pending_atomic_event *event;
+    struct intel_flip *flip;
 };

 struct intel_crtc_state {
@@ -44,6 +62,9 @@ struct intel_crtc_state {
     bool cursor_pinned;
     unsigned long connectors_bitmask;
     unsigned long encoders_bitmask;
+    bool need_event;
+    struct drm_pending_atomic_event *event;
+    struct intel_flip *flip;
 };

 struct intel_atomic_state {
@@ -269,6 +290,12 @@ static int plane_set(struct intel_atomic_state *s,
     struct drm_plane *plane = state->plane;
     struct drm_mode_object *obj;

+    /*
+     * Always send an event when the user sets the state of an object,
+     * even if that state doesn't actually change.
+     */
+    state->need_event = true;
+
     if (prop == prop_src_x) {
         if (plane->src_x == value)
             return 0;
@@ -362,6 +389,12 @@ static int crtc_set(struct intel_atomic_state *s,
     const struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
     struct drm_mode_object *obj;

+    /*
+     * Always send an event when the user sets the state of an object,
+     * even if that state doesn't actually change.
+     */
+    state->need_event = true;
+
     if (prop == prop_src_x) {
         if (crtc->x == value)
             return 0;
@@ -847,6 +880,119 @@ static void update_plane_obj(struct drm_device *dev,

 void _intel_disable_plane(struct drm_plane *plane, bool unpin);

+static struct drm_pending_atomic_event *alloc_event(struct drm_device *dev,
+                                                    struct drm_file *file_priv,
+                                                    uint64_t user_data)
+{
+    struct drm_pending_atomic_event *e;
+    unsigned long flags;
+
+    spin_lock_irqsave(&dev->event_lock, flags);
+
+    if (file_priv->event_space < sizeof e->event) {
+        spin_unlock_irqrestore(&dev->event_lock, flags);
+        return ERR_PTR(-ENOSPC);
+    }
+
+    file_priv->event_space -= sizeof e->event;
+    spin_unlock_irqrestore(&dev->event_lock, flags);
+
+    e = kzalloc(sizeof *e, GFP_KERNEL);
+    if (!e) {
+        spin_lock_irqsave(&dev->event_lock, flags);
+        file_priv->event_space += sizeof e->event;
+        spin_unlock_irqrestore(&dev->event_lock, flags);
+
+        return ERR_PTR(-ENOMEM);
+    }
+
+    e->event.base.type = DRM_EVENT_ATOMIC_COMPLETE;
+    e->event.base.length = sizeof e->event;
+    e->event.user_data = user_data;
+    e->base.event = &e->event.base;
+    e->base.file_priv = file_priv;
+    e->base.destroy = (void (*) (struct drm_pending_event *)) kfree;
+
+    return e;
+}
+
+static void free_event(struct drm_pending_atomic_event *e)
+{
+    e->base.file_priv->event_space += sizeof e->event;
+    kfree(e);
+}
+
+void intel_atomic_free_events(struct drm_device *dev, struct drm_file *file)
+{
+    struct drm_i915_file_private *file_priv = file->driver_priv;
+    struct intel_flip *intel_flip, *next;
+
+    spin_lock_irq(&dev->event_lock);
+
+    list_for_each_entry_safe(intel_flip, next,
+                             &file_priv->pending_flips, pending_head) {
+        free_event(intel_flip->event);
+        intel_flip->event = NULL;
+        list_del_init(&intel_flip->pending_head);
+    }
+
+    spin_unlock_irq(&dev->event_lock);
+}
+
+static void queue_event(struct drm_device *dev, struct drm_crtc *crtc,
+                        struct drm_pending_atomic_event *e)
+{
+    int pipe = to_intel_crtc(crtc)->pipe;
+    struct timeval tvbl;
+
+    /* FIXME this is wrong for flips that are completed not at vblank */
+    e->event.sequence = drm_vblank_count_and_time(dev, pipe, &tvbl);
+    e->event.tv_sec = tvbl.tv_sec;
+    e->event.tv_usec = tvbl.tv_usec;
+
+    list_add_tail(&e->base.link, &e->base.file_priv->event_list);
+    wake_up_interruptible(&e->base.file_priv->event_wait);
+}
+
+static void queue_remaining_events(struct drm_device *dev, struct intel_atomic_state *s)
+{
+    int i;
+
+    for (i = 0; i < dev->mode_config.num_crtc; i++) {
+        struct intel_crtc_state *st = &s->crtc[i];
+
+        if (st->event) {
+            if (st->old_fb)
+                st->event->event.old_fb_id = st->old_fb->base.id;
+
+            spin_lock_irq(&dev->event_lock);
+            queue_event(dev, st->crtc, st->event);
+            spin_unlock_irq(&dev->event_lock);
+
+            st->event = NULL;
+        }
+    }
+
+    for (i = 0; i < dev->mode_config.num_plane; i++) {
+        struct intel_plane_state *st = &s->plane[i];
+
+        if (!st->event)
+            continue;
+
+        /* FIXME should send the event to the CRTC the plane was on */
+        if (!st->plane->crtc)
+            continue;
+
+        if (st->old_fb)
+            st->event->event.old_fb_id = st->old_fb->base.id;
+
+        spin_lock_irq(&dev->event_lock);
+        queue_event(dev, st->plane->crtc, st->event);
+        spin_unlock_irq(&dev->event_lock);
+
+        st->event = NULL;
+    }
+}
+
 static int apply_config(struct drm_device *dev,
                         struct intel_atomic_state *s)
 {
@@ -964,7 +1110,14 @@ static void restore_state(struct drm_device *dev,
     i = 0;
     list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
         struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+        /*
+         * A bit of a hack since we don't have
+         * state separated from the crtc internals
+         */
+        spin_lock_irq(&intel_crtc->flip_helper.driver->lock);
+        s->saved_crtcs[i].flip_helper = intel_crtc->flip_helper;
         *intel_crtc = s->saved_crtcs[i++];
+        spin_unlock_irq(&intel_crtc->flip_helper.driver->lock);
     }
     i = 0;
     list_for_each_entry(plane, &dev->mode_config.plane_list, head)
@@ -1268,17 +1421,145 @@ static void update_props(struct drm_device *dev,
     }
 }

+static void atomic_pipe_commit(struct drm_device *dev,
+                               struct intel_atomic_state *state,
+                               int pipe);
+
+static int apply_nonblocking(struct drm_device *dev, struct intel_atomic_state *s)
+{
+    struct intel_crtc *intel_crtc;
+    int i;
+
+    for (i = 0; i < dev->mode_config.num_crtc; i++) {
+        struct intel_crtc_state *st = &s->crtc[i];
+
+        /*
+         * FIXME need to think this stuff through. The intel_crtc_page_flip
+         * vs. intel_disable_crtc handling doesn't make much sense either.
+         */
+        if (st->old_fb && atomic_read(&to_intel_framebuffer(st->old_fb)->obj->pending_flip) != 0)
+            return -EBUSY;
+    }
+
+    for (i = 0; i < dev->mode_config.num_plane; i++) {
+        struct intel_plane_state *st = &s->plane[i];
+
+        /*
+         * FIXME need to think this stuff through. The intel_crtc_page_flip
+         * vs. intel_disable_crtc handling doesn't make much sense either.
+         */
+        if (st->old_fb && atomic_read(&to_intel_framebuffer(st->old_fb)->obj->pending_flip) != 0)
+            return -EBUSY;
+    }
+
+    list_for_each_entry(intel_crtc, &dev->mode_config.crtc_list, base.head)
+        atomic_pipe_commit(dev, s, intel_crtc->pipe);
+
+    /* don't restore the old state in end() */
+    s->dirty = false;
+
+    return 0;
+}
+
+static int alloc_flip_data(struct drm_device *dev, struct intel_atomic_state *s)
+{
+    int i;
+
+    for (i = 0; i < dev->mode_config.num_crtc; i++) {
+        struct intel_crtc_state *st = &s->crtc[i];
+
+        if (st->need_event && s->flags & DRM_MODE_ATOMIC_EVENT) {
+            struct drm_pending_atomic_event *e;
+
+            e = alloc_event(dev, s->file, s->user_data);
+            if (IS_ERR(e))
+                return PTR_ERR(e);
+
+            e->event.obj_id = st->crtc->base.id;
+
+            st->event = e;
+        }
+
+        if (!st->fb_dirty && !st->mode_dirty && !st->cursor_dirty)
+            continue;
+
+        st->flip = kzalloc(sizeof *st->flip, GFP_KERNEL);
+        if (!st->flip)
+            return -ENOMEM;
+    }
+
+    for (i = 0; i < dev->mode_config.num_plane; i++) {
+        struct intel_plane_state *st = &s->plane[i];
+
+        if (st->need_event && s->flags & DRM_MODE_ATOMIC_EVENT) {
+            struct drm_pending_atomic_event *e;
+
+            e = alloc_event(dev, s->file, s->user_data);
+            if (IS_ERR(e))
+                return PTR_ERR(e);
+
+            e->event.obj_id = st->plane->base.id;
+
+            st->event = e;
+        }
+
+        if (!st->dirty)
+            continue;
+
+        st->flip = kzalloc(sizeof *st->flip, GFP_KERNEL);
+        if (!st->flip)
+            return -ENOMEM;
+    }
+
+    return 0;
+}
+
+static void free_flip_data(struct drm_device *dev, struct intel_atomic_state *s)
+{
+    int i;
+
+    for (i = 0; i < dev->mode_config.num_crtc; i++) {
+        struct intel_crtc_state *st = &s->crtc[i];
+
+        if (st->event) {
+            spin_lock_irq(&dev->event_lock);
+            free_event(st->event);
+            spin_unlock_irq(&dev->event_lock);
+            st->event = NULL;
+        }
+
+        kfree(st->flip);
+        st->flip = NULL;
+    }
+
+    for (i = 0; i < dev->mode_config.num_plane; i++) {
+        struct intel_plane_state *st = &s->plane[i];
+
+        if (st->event) {
+            spin_lock_irq(&dev->event_lock);
+            free_event(st->event);
+            spin_unlock_irq(&dev->event_lock);
+            st->event = NULL;
+        }
+
+        kfree(st->flip);
+        st->flip = NULL;
+    }
+}
+
 static int intel_atomic_commit(struct drm_device *dev, void *state)
 {
     struct intel_atomic_state *s = state;
     int ret;

-    if (s->flags & DRM_MODE_ATOMIC_NONBLOCK)
-        return -ENOSYS;
-
     if (!s->dirty)
         return 0;

+    ret = alloc_flip_data(dev, s);
+    if (ret)
+        return ret;
+
     ret = pin_fbs(dev, s);
     if (ret)
         return ret;
@@ -1287,17 +1568,38 @@ static int intel_atomic_commit(struct drm_device *dev, void *state)
     if (ret)
         return ret;

-    /* apply in a blocking manner */
-    ret = apply_config(dev, s);
-    if (ret) {
-        unpin_cursors(dev, s);
-        unpin_fbs(dev, s);
-        s->restore_hw = true;
-        return ret;
+    /* try to apply in a non blocking manner */
+    if (s->flags & DRM_MODE_ATOMIC_NONBLOCK) {
+        ret = apply_nonblocking(dev, s);
+        if (ret) {
+            unpin_cursors(dev, s);
+            unpin_fbs(dev, s);
+            return ret;
+        }
+    } else {
+        /* apply in a blocking manner */
+        ret = apply_config(dev, s);
+        if (ret) {
+            unpin_cursors(dev, s);
+            unpin_fbs(dev, s);
+            s->restore_hw = true;
+            return ret;
+        }
+
+        unpin_old_cursors(dev, s);
+        unpin_old_fbs(dev, s);
     }

-    unpin_old_cursors(dev, s);
-    unpin_old_fbs(dev, s);
+    /*
+     * Either we took the blocking code path, or perhaps the state of
+     * some objects didn't actually change? Nonetheless the user wanted
+     * events for all objects he touched, so queue up any events that
+     * are still pending.
+     *
+     * FIXME this needs more work. If the previous flip is still pending
+     * we shouldn't send this event until that flip completes.
+     */
+    queue_remaining_events(dev, s);

     update_plane_obj(dev, s);

@@ -1310,6 +1612,9 @@ static void intel_atomic_end(struct drm_device *dev, void *state)
 {
     struct intel_atomic_state *s = state;

+    /* don't send events when restoring old state */
+    free_flip_data(dev, state);
+
     /* restore the state of all objects */
     if (s->dirty)
         restore_state(dev, state);
@@ -1351,6 +1656,9 @@ static struct {
     { &prop_cursor_y, "CURSOR_Y", INT_MIN, INT_MAX },
 };

+static void intel_flip_init(struct drm_device *dev);
+static void intel_flip_fini(struct drm_device *dev);
+
 int intel_atomic_init(struct drm_device *dev)
 {
     struct drm_crtc *crtc;
@@ -1424,6 +1732,8 @@ int intel_atomic_init(struct drm_device *dev)

     dev->driver->atomic_funcs = &intel_atomic_funcs;

+    intel_flip_init(dev);
+
     return 0;

 out:
@@ -1440,6 +1750,8 @@ void intel_atomic_fini(struct drm_device *dev)
 {
     struct drm_crtc *crtc;

+    intel_flip_fini(dev);
+
     list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
         drm_property_destroy_blob(dev, crtc->mode_blob);
         drm_property_destroy_blob(dev, crtc->connector_ids_blob);
@@ -1460,3 +1772,480 @@ void intel_atomic_fini(struct drm_device *dev)
     drm_property_destroy(dev, prop_src_y);
     drm_property_destroy(dev, prop_src_x);
 }
+
+void intel_plane_calc(struct drm_crtc *crtc, struct drm_framebuffer *fb, int x, int y);
+void intel_plane_prepare(struct drm_crtc *crtc);
+void intel_plane_commit(struct drm_crtc *crtc);
+void intel_sprite_calc(struct drm_plane *plane, struct drm_framebuffer *fb, const struct intel_plane_coords *coords);
+void intel_sprite_prepare(struct drm_plane *plane);
+void intel_sprite_commit(struct drm_plane *plane);
+
+enum {
+    /* somewhat arbitrary value */
+    INTEL_VBL_CNT_TIMEOUT = 5,
+};
+
+static void intel_flip_complete(struct drm_flip *flip)
+{
+    struct intel_flip *intel_flip =
+        container_of(flip, struct intel_flip, base);
+    struct drm_device *dev = intel_flip->crtc->dev;
+    struct drm_i915_private *dev_priv = dev->dev_private;
+    struct drm_crtc *crtc = intel_flip->crtc;
+    struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+    int pipe = intel_crtc->pipe;
+    unsigned long flags;
+
+    spin_lock_irqsave(&dev->event_lock, flags);
+
+    if (intel_flip->event) {
+        list_del_init(&intel_flip->pending_head);
+        intel_flip->event->event.old_fb_id = intel_flip->old_fb_id;
+        queue_event(dev, crtc, intel_flip->event);
+    }
+
+    spin_unlock_irqrestore(&dev->event_lock, flags);
+
+    if (intel_flip->vblank_ref)
+        drm_vblank_put(dev, pipe);
+
+    /* Possibly allow rendering to old_bo again */
+    if (intel_flip->old_bo) {
+        if (intel_flip->plane) {
+            struct intel_plane *intel_plane = to_intel_plane(intel_flip->plane);
+            /* FIXME need proper numbering for all planes */
+            atomic_clear_mask(1 << (16 + intel_plane->pipe), &intel_flip->old_bo->pending_flip.counter);
+        } else
+            atomic_clear_mask(1 << intel_crtc->plane, &intel_flip->old_bo->pending_flip.counter);
+
+        if (atomic_read(&intel_flip->old_bo->pending_flip) == 0)
+            wake_up(&dev_priv->pending_flip_queue);
+    }
+}
+
+static void intel_flip_finish(struct drm_flip *flip)
+{
+    struct intel_flip *intel_flip =
+        container_of(flip, struct intel_flip, base);
+    struct drm_device *dev = intel_flip->crtc->dev;
+
+    if (intel_flip->old_bo) {
+        mutex_lock(&dev->struct_mutex);
+
+        intel_unpin_fb_obj(intel_flip->old_bo);
+
+        drm_gem_object_unreference(&intel_flip->old_bo->base);
+
+        mutex_unlock(&dev->struct_mutex);
+    }
+
+    if (intel_flip->old_cursor_bo)
+        intel_crtc_cursor_bo_unref(intel_flip->crtc, intel_flip->old_cursor_bo);
+}
+
+static void intel_flip_cleanup(struct drm_flip *flip)
+{
+    struct intel_flip *intel_flip =
+        container_of(flip, struct intel_flip, base);
+
+    kfree(intel_flip);
+}
+
+static void intel_flip_driver_flush(struct drm_flip_driver *driver)
+{
+    struct drm_i915_private *dev_priv =
+        container_of(driver, struct drm_i915_private, flip_driver);
+
+    /* Flush posted writes */
+    I915_READ(PIPEDSL(PIPE_A));
+}
+
+static bool intel_have_new_frmcount(struct drm_device *dev)
+{
+    return IS_G4X(dev) || INTEL_INFO(dev)->gen >= 5;
+}
+
+static u32 get_vbl_count(struct drm_crtc *crtc)
+{
+    struct drm_device *dev = crtc->dev;
+    struct drm_i915_private *dev_priv = dev->dev_private;
+    struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+    int pipe = intel_crtc->pipe;
+
+    if (intel_have_new_frmcount(dev)) {
+        return I915_READ(PIPE_FRMCOUNT_GM45(pipe));
+    } else {
+        u32 high, low1, low2, dsl;
+        unsigned int timeout = 0;
+
+        /*
+         * FIXME check where the frame counter increments, and if
+         * it happens in the middle of some line, take appropriate
+         * measures to get a sensible reading.
+         */
+
+        /* All reads must be satisfied during the same frame */
+        do {
+            low1 = I915_READ(PIPEFRAMEPIXEL(pipe)) >> PIPE_FRAME_LOW_SHIFT;
+            high = I915_READ(PIPEFRAME(pipe)) << 8;
+            dsl = I915_READ(PIPEDSL(pipe));
+            low2 = I915_READ(PIPEFRAMEPIXEL(pipe)) >> PIPE_FRAME_LOW_SHIFT;
+        } while (low1 != low2 && timeout++ < INTEL_VBL_CNT_TIMEOUT);
+
+        if (timeout >= INTEL_VBL_CNT_TIMEOUT)
+            dev_warn(dev->dev,
+                     "Timed out while determining VBL count for pipe %d\n", pipe);
+
+        return ((high | low2) +
+                ((dsl >= crtc->hwmode.crtc_vdisplay) &&
+                 (dsl < crtc->hwmode.crtc_vtotal - 1))) & 0xffffff;
+    }
+}
+
+static unsigned int usecs_to_scanlines(struct drm_crtc *crtc,
+                                       unsigned int usecs)
+{
+    /* paranoia */
+    if (!crtc->hwmode.crtc_htotal)
+        return 1;
+
+    return DIV_ROUND_UP(usecs * crtc->hwmode.clock,
+                        1000 * crtc->hwmode.crtc_htotal);
+}
+
+static void intel_pipe_vblank_evade(struct drm_crtc *crtc)
+{
+    struct drm_device *dev = crtc->dev;
+    struct drm_i915_private *dev_priv = dev->dev_private;
+    struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+    int pipe = intel_crtc->pipe;
+    /* FIXME needs to be calibrated sensibly */
+    u32 min = crtc->hwmode.crtc_vdisplay - usecs_to_scanlines(crtc, 50);
+    u32 max = crtc->hwmode.crtc_vdisplay - 1;
+    long timeout = msecs_to_jiffies(3);
+    u32 val;
+
+    bool vblank_ref = drm_vblank_get(dev, pipe) == 0;
+
+    intel_crtc->vbl_received = false;
+
+    val = I915_READ(PIPEDSL(pipe));
+
+    while (val >= min && val <= max && timeout > 0) {
+        local_irq_enable();
+
+        timeout = wait_event_timeout(intel_crtc->vbl_wait,
+                                     intel_crtc->vbl_received,
+                                     timeout);
+
+        local_irq_disable();
+
+        intel_crtc->vbl_received = false;
+
+        val = I915_READ(PIPEDSL(pipe));
+    }
+
+    if (vblank_ref)
+        drm_vblank_put(dev, pipe);
+
+    if (val >= min && val <= max)
+        dev_warn(dev->dev,
+                 "Page flipping close to vblank start (DSL=%u, VBL=%u)\n",
+                 val, crtc->hwmode.crtc_vdisplay);
+}
+
+static bool vbl_count_after_eq_new(u32 a, u32 b)
+{
+    return !((a - b) & 0x80000000);
+}
+
+static bool vbl_count_after_eq(u32 a, u32 b)
+{
+    return !((a - b) & 0x800000);
+}
+
+static bool intel_vbl_check(struct drm_flip *pending_flip, u32 vbl_count)
+{
+    struct intel_flip *old_intel_flip =
+        container_of(pending_flip, struct intel_flip, base);
+    struct drm_device *dev = old_intel_flip->crtc->dev;
+
+    if (intel_have_new_frmcount(dev))
+        return vbl_count_after_eq_new(vbl_count, old_intel_flip->vbl_count);
+    else
+        return vbl_count_after_eq(vbl_count, old_intel_flip->vbl_count);
+}
+
+static void intel_flip_prepare(struct drm_flip *flip)
+{
+    struct intel_flip *intel_flip =
+        container_of(flip, struct intel_flip, base);
+
+    /* FIXME some other pipe/pf stuff could be performed here as well. */
+
+    /* stage double buffer updates which need arming by something else */
+    if (intel_flip->plane)
+        intel_sprite_prepare(intel_flip->plane);
+    else
+        intel_plane_prepare(intel_flip->crtc);
+}
+
+static bool intel_flip_flip(struct drm_flip *flip,
+                            struct drm_flip *pending_flip)
+{
+    struct intel_flip *intel_flip = container_of(flip, struct intel_flip, base);
+    struct drm_crtc *crtc = intel_flip->crtc;
+    struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+    struct drm_device *dev = crtc->dev;
+    int pipe = intel_crtc->pipe;
+    u32 vbl_count;
+
+    intel_flip->vblank_ref = drm_vblank_get(dev, pipe) == 0;
+
+    vbl_count = get_vbl_count(crtc);
+
+    /* arm all the double buffer registers */
+    if (intel_flip->plane)
+        intel_sprite_commit(intel_flip->plane);
+    else
+        intel_plane_commit(crtc);
+
+    if (intel_flip->has_cursor)
+        intel_crtc_cursor_commit(crtc,
+                                 intel_crtc->cursor_handle,
+                                 intel_crtc->cursor_width,
+                                 intel_crtc->cursor_height,
+                                 intel_crtc->cursor_bo,
+                                 intel_crtc->cursor_addr);
+
+    /* This flip will happen on the next vblank */
+    if (intel_have_new_frmcount(dev))
+        intel_flip->vbl_count = vbl_count + 1;
+    else
+        intel_flip->vbl_count = (vbl_count + 1) & 0xffffff;
+
+    if (pending_flip) {
+        struct intel_flip *old_intel_flip =
+            container_of(pending_flip, struct intel_flip, base);
+        bool flipped = intel_vbl_check(pending_flip, vbl_count);
+
+        if (!flipped) {
+            swap(intel_flip->old_fb_id, old_intel_flip->old_fb_id);
+            swap(intel_flip->old_bo, old_intel_flip->old_bo);
+            swap(intel_flip->old_cursor_bo, old_intel_flip->old_cursor_bo);
+        }
+
+        return flipped;
+    }
+
+    return false;
+}
+
+static bool intel_flip_vblank(struct drm_flip *pending_flip)
+{
+    struct intel_flip *old_intel_flip =
+        container_of(pending_flip, struct intel_flip, base);
+    u32 vbl_count = get_vbl_count(old_intel_flip->crtc);
+
+    return intel_vbl_check(pending_flip, vbl_count);
+}
+
+static const struct drm_flip_helper_funcs intel_flip_funcs = {
+    .prepare = intel_flip_prepare,
+    .flip = intel_flip_flip,
+    .vblank = intel_flip_vblank,
+    .complete = intel_flip_complete,
+    .finish = intel_flip_finish,
+    .cleanup = intel_flip_cleanup,
+};
+
+static const struct drm_flip_driver_funcs intel_flip_driver_funcs = {
+    .flush = intel_flip_driver_flush,
+};
+
+static void intel_flip_init(struct drm_device *dev)
+{
+    struct drm_i915_private *dev_priv = dev->dev_private;
+    struct intel_crtc *intel_crtc;
+    struct intel_plane *intel_plane;
+
+    drm_flip_driver_init(&dev_priv->flip_driver, &intel_flip_driver_funcs);
+
+    list_for_each_entry(intel_crtc, &dev->mode_config.crtc_list, base.head) {
+        init_waitqueue_head(&intel_crtc->vbl_wait);
+
+        drm_flip_helper_init(&intel_crtc->flip_helper,
+                             &dev_priv->flip_driver, &intel_flip_funcs);
+    }
+
+    list_for_each_entry(intel_plane, &dev->mode_config.plane_list, base.head)
+        drm_flip_helper_init(&intel_plane->flip_helper,
+                             &dev_priv->flip_driver, &intel_flip_funcs);
+}
+
+static void intel_flip_fini(struct drm_device *dev)
+{
+    struct drm_i915_private *dev_priv = dev->dev_private;
+    struct intel_crtc *intel_crtc;
+    struct intel_plane *intel_plane;
+
+    list_for_each_entry(intel_plane, &dev->mode_config.plane_list, base.head)
+        drm_flip_helper_fini(&intel_plane->flip_helper);
+
+    list_for_each_entry(intel_crtc, &dev->mode_config.crtc_list, base.head)
+        drm_flip_helper_fini(&intel_crtc->flip_helper);
+
+    drm_flip_driver_fini(&dev_priv->flip_driver);
+}
+
+static void atomic_pipe_commit(struct drm_device *dev,
+                               struct intel_atomic_state *state,
+                               int pipe)
+{
+    struct drm_i915_private *dev_priv = dev->dev_private;
+    struct drm_i915_file_private *file_priv = state->file->driver_priv;
+    LIST_HEAD(flips);
+    int i;
+    bool pipe_enabled = to_intel_crtc(intel_get_crtc_for_pipe(dev, pipe))->active;
+
+    for (i = 0; i < dev->mode_config.num_crtc; i++) {
+        struct intel_crtc_state *st = &state->crtc[i];
+        struct drm_crtc *crtc = st->crtc;
+        struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+        struct intel_flip *intel_flip;
+
+        if (!st->fb_dirty && !st->cursor_dirty)
+            continue;
+
+        if (intel_crtc->pipe != pipe)
+            continue;
+
+        if (!st->flip)
+            continue;
+
+        intel_flip = st->flip;
+        st->flip = NULL;
+
+        drm_flip_init(&intel_flip->base, &intel_crtc->flip_helper);
+
+        if (st->event) {
+            intel_flip->event = st->event;
+            st->event = NULL;
+            /* need to keep track of it in case process exits */
+            spin_lock_irq(&dev->event_lock);
+            list_add_tail(&intel_flip->pending_head, &file_priv->pending_flips);
+            spin_unlock_irq(&dev->event_lock);
+        }
+
+        intel_flip->crtc = crtc;
+
+        intel_plane_calc(crtc, crtc->fb, crtc->x, crtc->y);
+
+        if (st->cursor_dirty) {
+            intel_flip->has_cursor = true;
+            intel_flip->old_cursor_bo = st->old_cursor_bo;
+        }
+
+        if (st->old_fb) {
+            intel_flip->old_fb_id = st->old_fb->base.id;
+            intel_flip->old_bo = to_intel_framebuffer(st->old_fb)->obj;
+
+            mutex_lock(&dev->struct_mutex);
+            drm_gem_object_reference(&intel_flip->old_bo->base);
+            mutex_unlock(&dev->struct_mutex);
+
+            /* Block clients from rendering to the new back buffer until
+             * the flip occurs and the object is no longer visible.
+             */
+            atomic_set_mask(1 << intel_crtc->plane, &intel_flip->old_bo->pending_flip.counter);
+        }
+
+        list_add_tail(&intel_flip->base.list, &flips);
+    }
+
+    for (i = 0; i < dev->mode_config.num_plane; i++) {
+        struct intel_plane_state *st = &state->plane[i];
+        struct drm_plane *plane = st->plane;
+        struct intel_plane *intel_plane = to_intel_plane(plane);
+        struct intel_flip *intel_flip;
+
+        if (!st->dirty)
+            continue;
+
+        if (intel_plane->pipe != pipe)
+            continue;
+
+        if (!st->flip)
+            continue;
+
+        intel_flip = st->flip;
+        st->flip = NULL;
+
+        drm_flip_init(&intel_flip->base, &intel_plane->flip_helper);
+
+        if (st->event) {
+            intel_flip->event = st->event;
+            st->event = NULL;
+            /* need to keep track of it in case process exits */
+            spin_lock_irq(&dev->event_lock);
+            list_add_tail(&intel_flip->pending_head, &file_priv->pending_flips);
+            spin_unlock_irq(&dev->event_lock);
+        }
+
+        intel_flip->crtc = intel_get_crtc_for_pipe(dev, pipe);
+        intel_flip->plane = plane;
+
+        intel_sprite_calc(plane, plane->fb, &st->coords);
+
+        if (st->old_fb) {
+            intel_flip->old_fb_id = st->old_fb->base.id;
+            intel_flip->old_bo = to_intel_framebuffer(st->old_fb)->obj;
+
+            mutex_lock(&dev->struct_mutex);
+            drm_gem_object_reference(&intel_flip->old_bo->base);
+            mutex_unlock(&dev->struct_mutex);
+
+            /* Block clients from rendering to the new back buffer until
+             * the flip occurs and the object is no longer visible.
+             */
+            /* FIXME need proper numbering for all planes */
+            atomic_set_mask(1 << (16 + intel_plane->pipe), &intel_flip->old_bo->pending_flip.counter);
+        }
+
+        list_add_tail(&intel_flip->base.list, &flips);
+    }
+
+    if (list_empty(&flips))
+        return;
+
+    if (!pipe_enabled) {
+        drm_flip_driver_complete_flips(&dev_priv->flip_driver, &flips);
+        return;
+    }
+
+    drm_flip_driver_prepare_flips(&dev_priv->flip_driver, &flips);
+
+    local_irq_disable();
+
+    intel_pipe_vblank_evade(intel_get_crtc_for_pipe(dev, pipe));
+
+    drm_flip_driver_schedule_flips(&dev_priv->flip_driver, &flips);
+
+    local_irq_enable();
+}
+
+void intel_atomic_handle_vblank(struct drm_device *dev, int pipe)
+{
+    struct intel_crtc *intel_crtc = to_intel_crtc(intel_get_crtc_for_pipe(dev, pipe));
+    struct intel_plane *intel_plane;
+
+    intel_crtc->vbl_received = true;
+
+    drm_flip_helper_vblank(&intel_crtc->flip_helper);
+
+    list_for_each_entry(intel_plane, &dev->mode_config.plane_list, base.head) {
+        if (intel_plane->pipe == pipe)
+            drm_flip_helper_vblank(&intel_plane->flip_helper);
+    }
+}
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 0f0b0c9..2f999c6 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -3351,6 +3351,7 @@ static void ironlake_crtc_disable(struct drm_crtc *crtc)
         return;

     intel_crtc_wait_for_pending_flips(crtc);
+    drm_flip_helper_clear(&intel_crtc->flip_helper);
     drm_vblank_off(dev, pipe);
     intel_crtc_update_cursor(crtc, false);

@@ -3522,6 +3523,7 @@ static void i9xx_crtc_disable(struct drm_crtc *crtc)
     /* Give the overlay scaler a chance to disable if it's on this pipe */
     intel_crtc_wait_for_pending_flips(crtc);
+    drm_flip_helper_clear(&intel_crtc->flip_helper);
     drm_vblank_off(dev, pipe);
     intel_crtc_dpms_overlay(intel_crtc, false);
     intel_crtc_update_cursor(crtc, false);
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 382cb48..845d0bd 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -31,6 +31,7 @@
 #include "drm_crtc.h"
 #include "drm_crtc_helper.h"
 #include "drm_fb_helper.h"
+#include "drm_flip.h"

 #define _wait_for(COND, MS, W) ({ \
     unsigned long timeout__ = jiffies + msecs_to_jiffies(MS); \
@@ -208,6 +209,10 @@ struct intel_crtc {
     struct intel_pch_pll *pch_pll;

     struct intel_plane_regs primary_regs;
+
+    struct drm_flip_helper flip_helper;
+    wait_queue_head_t vbl_wait;
+    bool vbl_received;
 };

 struct intel_plane_coords {
@@ -234,6 +239,7 @@ struct intel_plane {
                 struct drm_intel_sprite_colorkey *key);
     void (*get_colorkey)(struct drm_plane *plane,
                 struct drm_intel_sprite_colorkey *key);
+    struct drm_flip_helper flip_helper;
 };

 struct intel_watermark_params {