From patchwork Thu Apr 18 20:53:41 2019
X-Patchwork-Submitter: Rodrigo Vivi
X-Patchwork-Id: 10907999
From: Rodrigo Vivi
To: intel-gfx@lists.freedesktop.org
Date: Thu, 18 Apr 2019 13:53:41 -0700
Message-Id: <20190418205347.6402-3-rodrigo.vivi@intel.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190418205347.6402-1-rodrigo.vivi@intel.com>
References: <20190418205347.6402-1-rodrigo.vivi@intel.com>
Subject: [Intel-gfx] [RFC 2/8] drm/i915: Move IRQ related stuff from intel_rps
 to the new intel_irq.

The plan is to consolidate all IRQ related stuff together under the new
intel_irq. So let's continue with RPS stuff.

Signed-off-by: Rodrigo Vivi
---
 drivers/gpu/drm/i915/i915_drv.h |  8 ++-----
 drivers/gpu/drm/i915/i915_irq.c | 41 ++++++++++++++++++---------------
 2 files changed, 24 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 0b4aa818d66b..06617a67002c 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -650,16 +650,12 @@ struct intel_rps_ei {
 struct intel_irq {
 	/* protects the irq masks */
 	spinlock_t lock;
+	bool rps_interrupts_enabled;
+	u32 pm_iir;
 };
 
 struct intel_rps {
-	/*
-	 * work, interrupts_enabled and pm_iir are protected by
-	 * dev_priv->irq.lock
-	 */
 	struct work_struct work;
-	bool interrupts_enabled;
-	u32 pm_iir;
 
 	/* PM interrupt bits that should never be masked */
 	u32 pm_intrmsk_mbz;
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 679dc63244d9..487ea27ea152 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -519,7 +519,7 @@ void gen11_reset_rps_interrupts(struct drm_i915_private *dev_priv)
 	while (gen11_reset_one_iir(dev_priv, 0, GEN11_GTPM))
 		;
 
-	dev_priv->gt_pm.rps.pm_iir = 0;
+	dev_priv->irq.pm_iir = 0;
 
 	spin_unlock_irq(&dev_priv->irq.lock);
 }
@@ -528,46 +528,47 @@ void gen6_reset_rps_interrupts(struct drm_i915_private *dev_priv)
 {
 	spin_lock_irq(&dev_priv->irq.lock);
 	gen6_reset_pm_iir(dev_priv, GEN6_PM_RPS_EVENTS);
-	dev_priv->gt_pm.rps.pm_iir = 0;
+	dev_priv->irq.pm_iir = 0;
 	spin_unlock_irq(&dev_priv->irq.lock);
 }
 
 void gen6_enable_rps_interrupts(struct drm_i915_private *dev_priv)
 {
-	struct intel_rps *rps = &dev_priv->gt_pm.rps;
+	struct intel_irq *irq = &dev_priv->irq;
 
-	if (READ_ONCE(rps->interrupts_enabled))
+	if (READ_ONCE(irq->rps_interrupts_enabled))
 		return;
 
-	spin_lock_irq(&dev_priv->irq.lock);
-	WARN_ON_ONCE(rps->pm_iir);
+	spin_lock_irq(&irq->lock);
+	WARN_ON_ONCE(irq->pm_iir);
 
 	if (INTEL_GEN(dev_priv) >= 11)
 		WARN_ON_ONCE(gen11_reset_one_iir(dev_priv, 0, GEN11_GTPM));
 	else
 		WARN_ON_ONCE(I915_READ(gen6_pm_iir(dev_priv)) &
 			     dev_priv->pm_rps_events);
 
-	rps->interrupts_enabled = true;
+	irq->rps_interrupts_enabled = true;
 	gen6_enable_pm_irq(dev_priv, dev_priv->pm_rps_events);
 
-	spin_unlock_irq(&dev_priv->irq.lock);
+	spin_unlock_irq(&irq->lock);
 }
 
 void gen6_disable_rps_interrupts(struct drm_i915_private *dev_priv)
 {
 	struct intel_rps *rps = &dev_priv->gt_pm.rps;
+	struct intel_irq *irq = &dev_priv->irq;
 
-	if (!READ_ONCE(rps->interrupts_enabled))
+	if (!READ_ONCE(irq->rps_interrupts_enabled))
 		return;
 
-	spin_lock_irq(&dev_priv->irq.lock);
-	rps->interrupts_enabled = false;
+	spin_lock_irq(&irq->lock);
+	irq->rps_interrupts_enabled = false;
 
 	I915_WRITE(GEN6_PMINTRMSK, gen6_sanitize_rps_pm_mask(dev_priv, ~0u));
 
 	gen6_disable_pm_irq(dev_priv, GEN6_PM_RPS_EVENTS);
 
-	spin_unlock_irq(&dev_priv->irq.lock);
+	spin_unlock_irq(&irq->lock);
 
 	synchronize_irq(dev_priv->drm.irq);
 
 	/* Now that we will not be generating any more work, flush any
@@ -1290,8 +1291,8 @@ static void gen6_pm_rps_work(struct work_struct *work)
 	u32 pm_iir = 0;
 
 	spin_lock_irq(&dev_priv->irq.lock);
-	if (rps->interrupts_enabled) {
-		pm_iir = fetch_and_zero(&rps->pm_iir);
+	if (dev_priv->irq.rps_interrupts_enabled) {
+		pm_iir = fetch_and_zero(&dev_priv->irq.pm_iir);
 		client_boost = atomic_read(&rps->num_waiters);
 	}
 	spin_unlock_irq(&dev_priv->irq.lock);
@@ -1372,7 +1373,7 @@ static void gen6_pm_rps_work(struct work_struct *work)
 out:
 	/* Make sure not to corrupt PMIMR state used by ringbuffer on GEN6 */
 	spin_lock_irq(&dev_priv->irq.lock);
-	if (rps->interrupts_enabled)
+	if (dev_priv->irq.rps_interrupts_enabled)
 		gen6_unmask_pm_irq(dev_priv, dev_priv->pm_rps_events);
 	spin_unlock_irq(&dev_priv->irq.lock);
 }
@@ -1843,6 +1844,7 @@ static void i9xx_pipe_crc_irq_handler(struct drm_i915_private *dev_priv,
 static void gen11_rps_irq_handler(struct drm_i915_private *i915, u32 pm_iir)
 {
 	struct intel_rps *rps = &i915->gt_pm.rps;
+	struct intel_irq *irq = &i915->irq;
 	const u32 events = i915->pm_rps_events & pm_iir;
 
 	lockdep_assert_held(&i915->irq.lock);
@@ -1852,22 +1854,23 @@ static void gen11_rps_irq_handler(struct drm_i915_private *i915, u32 pm_iir)
 
 	gen6_mask_pm_irq(i915, events);
 
-	if (!rps->interrupts_enabled)
+	if (!irq->rps_interrupts_enabled)
 		return;
 
-	rps->pm_iir |= events;
+	irq->pm_iir |= events;
 	schedule_work(&rps->work);
 }
 
 static void gen6_rps_irq_handler(struct drm_i915_private *dev_priv, u32 pm_iir)
 {
 	struct intel_rps *rps = &dev_priv->gt_pm.rps;
+	struct intel_irq *irq = &dev_priv->irq;
 
 	if (pm_iir & dev_priv->pm_rps_events) {
 		spin_lock(&dev_priv->irq.lock);
 		gen6_mask_pm_irq(dev_priv, pm_iir & dev_priv->pm_rps_events);
-		if (rps->interrupts_enabled) {
-			rps->pm_iir |= pm_iir & dev_priv->pm_rps_events;
+		if (irq->rps_interrupts_enabled) {
+			irq->pm_iir |= pm_iir & dev_priv->pm_rps_events;
 			schedule_work(&rps->work);
 		}
 		spin_unlock(&dev_priv->irq.lock);