From patchwork Fri Oct 14 17:47:55 2016
X-Patchwork-Submitter: Lina Iyer
X-Patchwork-Id: 9377193
X-Patchwork-Delegate: agross@codeaurora.org
From: Lina Iyer
To: ulf.hansson@linaro.org, khilman@kernel.org, rjw@rjwysocki.net,
	linux-pm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: andy.gross@linaro.org, sboyd@codeaurora.org,
	linux-arm-msm@vger.kernel.org, brendan.jackman@arm.com,
	lorenzo.pieralisi@arm.com, sudeep.holla@arm.com, Juri.Lelli@arm.com,
	Lina Iyer
Subject: [PATCH v3 7/8] PM / Domains: Support IRQ safe PM domains
Date: Fri, 14 Oct 2016 10:47:55 -0700
Message-Id: <1476467276-75094-8-git-send-email-lina.iyer@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1476467276-75094-1-git-send-email-lina.iyer@linaro.org>
References: <1476467276-75094-1-git-send-email-lina.iyer@linaro.org>

Generic Power Domains currently support turning on/off only in process
context. This prevents the usage of PM domains for domains that could
be powered on/off in a context where IRQs are disabled. Many such
domains exist today and, because of this limitation, do not get powered
off when the IRQ safe devices in that domain are powered off.

However, not all domains can operate in IRQ safe contexts. Genpd
therefore has to support both cases, where the domain may or may not
operate in IRQ safe contexts. Configuring genpd to use an appropriate
lock for the domain allows domains that have IRQ safe devices to
runtime suspend and resume in atomic context.

To achieve domain-specific locking, set the domain's ->flags to
GENPD_FLAG_IRQ_SAFE when defining the domain. This indicates that genpd
should use a spinlock instead of a mutex for locking the domain.
Locking is abstracted through the genpd_lock() and genpd_unlock()
functions, which use the flag to determine the appropriate lock for
that domain.

Domains that have lower latency to suspend and resume and that can
operate with IRQs disabled may now be able to save power when their
component devices and sub-domains are idle at runtime.

The restriction this imposes on the domain hierarchy is that a non-IRQ
safe domain may not have an IRQ safe subdomain, but an IRQ safe domain
may have both IRQ safe and non-IRQ safe subdomains and devices.
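As a rough usage sketch (not part of this patch), a platform that wants
its domain handled under the spinlock sets the flag before registering
the domain; foo_pd and its callbacks below are hypothetical
placeholders:

	#include <linux/pm_domain.h>

	/* May be invoked with IRQs disabled, so it must not sleep */
	static int foo_pd_power_on(struct generic_pm_domain *pd)
	{
		/* poke a register, poll briefly, etc. */
		return 0;
	}

	static int foo_pd_power_off(struct generic_pm_domain *pd)
	{
		return 0;
	}

	static struct generic_pm_domain foo_pd = {
		.name		= "foo-pd",
		.power_on	= foo_pd_power_on,
		.power_off	= foo_pd_power_off,
		.flags		= GENPD_FLAG_IRQ_SAFE,	/* spinlock, not mutex */
	};

	static int __init foo_pd_setup(void)
	{
		/* genpd picks the spinlock-based lock ops because of the flag */
		return pm_genpd_init(&foo_pd, NULL, false);
	}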
Cc: Ulf Hansson
Cc: Kevin Hilman
Cc: Rafael J. Wysocki
Signed-off-by: Lina Iyer
Acked-by: Ulf Hansson
---
 drivers/base/power/domain.c | 111 ++++++++++++++++++++++++++++++++++++++++----
 include/linux/pm_domain.h   |  10 +++-
 2 files changed, 110 insertions(+), 11 deletions(-)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 4194012..aac656a 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -74,11 +74,70 @@ static const struct genpd_lock_ops genpd_mtx_ops = {
 	.unlock = genpd_unlock_mtx,
 };
 
+static void genpd_lock_spin(struct generic_pm_domain *genpd)
+	__acquires(&genpd->slock)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&genpd->slock, flags);
+	genpd->lock_flags = flags;
+}
+
+static void genpd_lock_nested_spin(struct generic_pm_domain *genpd,
+					int depth)
+	__acquires(&genpd->slock)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave_nested(&genpd->slock, flags, depth);
+	genpd->lock_flags = flags;
+}
+
+static int genpd_lock_interruptible_spin(struct generic_pm_domain *genpd)
+	__acquires(&genpd->slock)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&genpd->slock, flags);
+	genpd->lock_flags = flags;
+	return 0;
+}
+
+static void genpd_unlock_spin(struct generic_pm_domain *genpd)
+	__releases(&genpd->slock)
+{
+	spin_unlock_irqrestore(&genpd->slock, genpd->lock_flags);
+}
+
+static const struct genpd_lock_ops genpd_spin_ops = {
+	.lock = genpd_lock_spin,
+	.lock_nested = genpd_lock_nested_spin,
+	.lock_interruptible = genpd_lock_interruptible_spin,
+	.unlock = genpd_unlock_spin,
+};
+
 #define genpd_lock(p)			p->lock_ops->lock(p)
 #define genpd_lock_nested(p, d)		p->lock_ops->lock_nested(p, d)
 #define genpd_lock_interruptible(p)	p->lock_ops->lock_interruptible(p)
 #define genpd_unlock(p)			p->lock_ops->unlock(p)
 
+#define genpd_is_irq_safe(genpd)	(genpd->flags & GENPD_FLAG_IRQ_SAFE)
+
+static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
+		struct generic_pm_domain *genpd)
+{
+	bool ret;
+
+	ret = pm_runtime_is_irq_safe(dev) && !genpd_is_irq_safe(genpd);
+
+	/* Warn once for each IRQ safe dev in no sleep domain */
+	if (ret)
+		dev_warn_once(dev, "PM domain %s will not be powered off\n",
+				genpd->name);
+
+	return ret;
+}
+
 /*
  * Get the generic PM domain for a particular struct device.
  * This validates the struct device pointer, the PM domain pointer,
@@ -343,7 +402,12 @@ static int genpd_poweroff(struct generic_pm_domain *genpd, bool is_async)
 		if (stat > PM_QOS_FLAGS_NONE)
 			return -EBUSY;
 
-		if (!pm_runtime_suspended(pdd->dev) || pdd->dev->power.irq_safe)
+		/*
+		 * Do not allow PM domain to be powered off, when an IRQ safe
+		 * device is part of a non-IRQ safe domain.
+		 */
+		if (!pm_runtime_suspended(pdd->dev) ||
+			irq_safe_dev_in_no_sleep_domain(pdd->dev, genpd))
 			not_suspended++;
 	}
@@ -506,10 +570,10 @@ static int genpd_runtime_suspend(struct device *dev)
 	}
 
 	/*
-	 * If power.irq_safe is set, this routine will be run with interrupts
-	 * off, so it can't use mutexes.
+	 * If power.irq_safe is set, this routine may be run with
+	 * IRQs disabled, so suspend only if the PM domain also is irq_safe.
 	 */
-	if (dev->power.irq_safe)
+	if (irq_safe_dev_in_no_sleep_domain(dev, genpd))
 		return 0;
 
 	genpd_lock(genpd);
@@ -543,8 +607,11 @@ static int genpd_runtime_resume(struct device *dev)
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	/* If power.irq_safe, the PM domain is never powered off. */
-	if (dev->power.irq_safe) {
+	/*
+	 * As we don't power off a non IRQ safe domain, which holds
+	 * an IRQ safe device, we don't need to restore power to it.
+	 */
+	if (irq_safe_dev_in_no_sleep_domain(dev, genpd)) {
 		timed = false;
 		goto out;
 	}
@@ -586,7 +653,8 @@ static int genpd_runtime_resume(struct device *dev)
 err_stop:
 	genpd_stop_dev(genpd, dev);
 err_poweroff:
-	if (!dev->power.irq_safe) {
+	if (!pm_runtime_is_irq_safe(dev) ||
+		(pm_runtime_is_irq_safe(dev) && genpd_is_irq_safe(genpd))) {
 		genpd_lock(genpd);
 		genpd_poweroff(genpd, 0);
 		genpd_unlock(genpd);
@@ -1223,6 +1291,17 @@ static int genpd_add_subdomain(struct generic_pm_domain *genpd,
 	    || genpd == subdomain)
 		return -EINVAL;
 
+	/*
+	 * If the domain can be powered on/off in an IRQ safe
+	 * context, ensure that the subdomain can also be
+	 * powered on/off in that context.
+	 */
+	if (!genpd_is_irq_safe(genpd) && genpd_is_irq_safe(subdomain)) {
+		WARN("Parent %s of subdomain %s must be IRQ safe\n",
+				genpd->name, subdomain->name);
+		return -EINVAL;
+	}
+
 	link = kzalloc(sizeof(*link), GFP_KERNEL);
 	if (!link)
 		return -ENOMEM;
@@ -1337,6 +1416,17 @@ static int genpd_set_default_power_state(struct generic_pm_domain *genpd)
 	return 0;
 }
 
+static void genpd_lock_init(struct generic_pm_domain *genpd)
+{
+	if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
+		spin_lock_init(&genpd->slock);
+		genpd->lock_ops = &genpd_spin_ops;
+	} else {
+		mutex_init(&genpd->mlock);
+		genpd->lock_ops = &genpd_mtx_ops;
+	}
+}
+
 /**
  * pm_genpd_init - Initialize a generic I/O PM domain object.
  * @genpd: PM domain object to initialize.
@@ -1356,8 +1446,7 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
 	INIT_LIST_HEAD(&genpd->master_links);
 	INIT_LIST_HEAD(&genpd->slave_links);
 	INIT_LIST_HEAD(&genpd->dev_list);
-	mutex_init(&genpd->mlock);
-	genpd->lock_ops = &genpd_mtx_ops;
+	genpd_lock_init(genpd);
 	genpd->gov = gov;
 	INIT_WORK(&genpd->power_off_work, genpd_power_off_work_fn);
 	atomic_set(&genpd->sd_count, 0);
@@ -2131,7 +2220,9 @@ static int pm_genpd_summary_one(struct seq_file *s,
 	}
 
 	list_for_each_entry(pm_data, &genpd->dev_list, list_node) {
-		kobj_path = kobject_get_path(&pm_data->dev->kobj, GFP_KERNEL);
+		kobj_path = kobject_get_path(&pm_data->dev->kobj,
+				genpd_is_irq_safe(genpd) ?
+				GFP_ATOMIC : GFP_KERNEL);
 		if (kobj_path == NULL)
 			continue;
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index 811b968..81ece61 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -15,9 +15,11 @@
 #include <linux/err.h>
 #include <linux/of.h>
 #include <linux/notifier.h>
+#include <linux/spinlock.h>
 
 /* Defines used for the flags field in the struct generic_pm_domain */
 #define GENPD_FLAG_PM_CLK	(1U << 0) /* PM domain uses PM clk */
+#define GENPD_FLAG_IRQ_SAFE	(1U << 1) /* PM domain operates in atomic */
 
 enum gpd_status {
 	GPD_STATE_ACTIVE = 0,	/* PM domain is active */
@@ -76,7 +78,13 @@ struct generic_pm_domain {
 	unsigned int state_idx; /* state that genpd will go to when off */
 	void *free; /* Free the state that was allocated for default */
 	const struct genpd_lock_ops *lock_ops;
-	struct mutex mlock;
+	union {
+		struct mutex mlock;
+		struct {
+			spinlock_t slock;
+			unsigned long lock_flags;
+		};
+	};
 };
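For completeness, a consumer-side sketch (not part of this patch): a
driver for a device inside such an IRQ safe domain can mark the device
IRQ safe with pm_runtime_irq_safe() and then use runtime PM from atomic
context, with genpd powering the domain on/off under its spinlock. The
foo_* names below are hypothetical:

	#include <linux/interrupt.h>
	#include <linux/platform_device.h>
	#include <linux/pm_runtime.h>

	static irqreturn_t foo_irq(int irq, void *data)
	{
		struct device *dev = data;

		/*
		 * Legal in hard IRQ context only because both the device
		 * and its PM domain are IRQ safe.
		 */
		pm_runtime_get_sync(dev);
		/* ... access the hardware ... */
		pm_runtime_put(dev);

		return IRQ_HANDLED;
	}

	static int foo_probe(struct platform_device *pdev)
	{
		int irq = platform_get_irq(pdev, 0);

		if (irq < 0)
			return irq;

		/* Runtime PM callbacks may now be invoked with IRQs off */
		pm_runtime_irq_safe(&pdev->dev);
		pm_runtime_enable(&pdev->dev);

		return devm_request_irq(&pdev->dev, irq, foo_irq, 0,
					"foo", &pdev->dev);
	}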