From patchwork Wed Aug 5 16:32:44 2015
X-Patchwork-Submitter: Lina Iyer
X-Patchwork-Id: 6951801
X-Patchwork-Delegate: agross@codeaurora.org
From: Lina Iyer
To: linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org,
	devicetree@vger.kernel.org, linux-pm@vger.kernel.org
Cc: agross@codeaurora.org, msivasub@codeaurora.org, sboyd@codeaurora.org,
	robh@kernel.org, Lina Iyer, Jeffrey Hugo, Bjorn Andersson
Subject: [PATCH RFC 08/10] hwspinlock: qcom: Lock #7 is special lock, uses dynamic proc_id
Date: Wed, 5 Aug 2015 10:32:44 -0600
Message-Id: <1438792366-2737-9-git-send-email-lina.iyer@linaro.org>
In-Reply-To: <1438792366-2737-1-git-send-email-lina.iyer@linaro.org>
References: <438731339-58317-1-git-send-email-lina.iyer@linaro.org>
 <1438792366-2737-1-git-send-email-lina.iyer@linaro.org>
X-Mailing-List: linux-arm-msm@vger.kernel.org

Hwspinlocks are widely used between processors in an SoC, and also
between elevation levels within the same processor.
QCOM SoCs use a hwspinlock to serialize entry into a low power mode
when the context switches from Linux to the secure monitor. Lock #7 has
been assigned for this purpose. To differentiate between one cpu core
holding the lock and another cpu contending for it, the proc id written
into the lock is (128 + cpu id). This value is unique among the cpu
cores, so when one core locks the hwspinlock the other cores wait for
it to be released, since each of them writes a different proc id. This
scheme applies to lock #7 only.

Declare lock #7 as raw capable, so the hwspinlock framework does not
enforce acquiring a s/w spinlock before acquiring the hwspinlock.

Cc: Jeffrey Hugo
Cc: Bjorn Andersson
Cc: Andy Gross
Signed-off-by: Lina Iyer
---
 drivers/hwspinlock/qcom_hwspinlock.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/drivers/hwspinlock/qcom_hwspinlock.c b/drivers/hwspinlock/qcom_hwspinlock.c
index c752447..573f0dd 100644
--- a/drivers/hwspinlock/qcom_hwspinlock.c
+++ b/drivers/hwspinlock/qcom_hwspinlock.c
@@ -25,16 +25,26 @@
 
 #include "hwspinlock_internal.h"
 
-#define QCOM_MUTEX_APPS_PROC_ID	1
-#define QCOM_MUTEX_NUM_LOCKS	32
+#define QCOM_MUTEX_APPS_PROC_ID		1
+#define QCOM_MUTEX_CPUIDLE_OFFSET	128
+#define QCOM_CPUIDLE_LOCK		7
+#define QCOM_MUTEX_NUM_LOCKS		32
+
+static inline u32 __qcom_get_proc_id(struct hwspinlock *lock)
+{
+	return hwspin_lock_get_id(lock) == QCOM_CPUIDLE_LOCK ?
+		(QCOM_MUTEX_CPUIDLE_OFFSET + smp_processor_id()) :
+		QCOM_MUTEX_APPS_PROC_ID;
+}
 
 static int qcom_hwspinlock_trylock(struct hwspinlock *lock)
 {
 	struct regmap_field *field = lock->priv;
 	u32 lock_owner;
 	int ret;
+	u32 proc_id = __qcom_get_proc_id(lock);
 
-	ret = regmap_field_write(field, QCOM_MUTEX_APPS_PROC_ID);
+	ret = regmap_field_write(field, proc_id);
 	if (ret)
 		return ret;
 
@@ -42,7 +52,7 @@ static int qcom_hwspinlock_trylock(struct hwspinlock *lock)
 	if (ret)
 		return ret;
 
-	return lock_owner == QCOM_MUTEX_APPS_PROC_ID;
+	return lock_owner == proc_id;
 }
 
 static void qcom_hwspinlock_unlock(struct hwspinlock *lock)
@@ -57,7 +67,7 @@ static void qcom_hwspinlock_unlock(struct hwspinlock *lock)
 		return;
 	}
 
-	if (lock_owner != QCOM_MUTEX_APPS_PROC_ID) {
+	if (lock_owner != __qcom_get_proc_id(lock)) {
 		pr_err("%s: spinlock not owned by us (actual owner is %d)\n",
 			__func__, lock_owner);
 	}
@@ -129,6 +139,8 @@ static int qcom_hwspinlock_probe(struct platform_device *pdev)
 						     regmap, field);
 	}
 
+	bank->lock[QCOM_CPUIDLE_LOCK].hwcaps = HWL_CAP_ALLOW_RAW;
+
 	pm_runtime_enable(&pdev->dev);
 
 	ret = hwspin_lock_register(bank, &pdev->dev, &qcom_hwspinlock_ops,
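
Not part of the patch, just to illustrate the intent above: a minimal
sketch of how a cpuidle path could take the raw-capable lock #7,
assuming raw-mode helpers named hwspin_trylock_raw()/hwspin_unlock_raw()
(those are the names later mainline kernels use; the exact API added
earlier in this series may differ) and a hypothetical caller
example_enter_lpm(). Because the owner value is (128 + cpu id), the
caller must stay on the same cpu, with preemption disabled, between
lock and unlock.

#include <linux/errno.h>
#include <linux/hwspinlock.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/smp.h>

#define EXAMPLE_CPUIDLE_LOCK	7	/* lock id reserved by this patch */

static struct hwspinlock *cpuidle_hwlock;

static int __init example_init(void)
{
	/* Request the dedicated lock by its fixed id */
	cpuidle_hwlock = hwspin_lock_request_specific(EXAMPLE_CPUIDLE_LOCK);
	return cpuidle_hwlock ? 0 : -EBUSY;
}
module_init(example_init);

static void example_enter_lpm(void)
{
	/*
	 * Raw mode: the framework takes no s/w spinlock first, so this
	 * is usable on the idle path. The hardware lock records
	 * 128 + smp_processor_id() as the owner.
	 */
	if (hwspin_trylock_raw(cpuidle_hwlock))
		return;	/* held by another core or the secure monitor */

	/* ... hand over to the secure monitor for the low power mode ... */

	hwspin_unlock_raw(cpuidle_hwlock);
}

MODULE_LICENSE("GPL v2");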