From patchwork Fri Jan 23 14:22:25 2015
X-Patchwork-Submitter: Daniel Thompson
X-Patchwork-Id: 5694361
From: Daniel Thompson
To: Thomas Gleixner
Cc: Daniel Thompson, linaro-kernel@lists.linaro.org, Russell King,
    Jason Cooper, patches@linaro.org, Marc Zyngier, Catalin Marinas,
    Will Deacon, linux-kernel@vger.kernel.org, Steven Rostedt,
    Sumit Semwal, Dmitry Pervushin, Dirk Behme, John Stultz,
    Tim Sander, Daniel Drake, Stephen Boyd,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH 3.19-rc2 v15 1/8] irqchip: gic: Optimize locking in gic_raise_softirq
Date: Fri, 23 Jan 2015 14:22:25 +0000
Message-Id: <1422022952-31552-2-git-send-email-daniel.thompson@linaro.org>
In-Reply-To: <1422022952-31552-1-git-send-email-daniel.thompson@linaro.org>
References: <1422022952-31552-1-git-send-email-daniel.thompson@linaro.org>
X-Mailer: git-send-email 1.9.3

Currently gic_raise_softirq() is locked using irq_controller_lock. This
lock is primarily used to make register read-modify-write sequences
atomic, but gic_raise_softirq() uses it instead to ensure that the
big.LITTLE migration logic can figure out when it is safe to migrate
interrupts between physical cores.

This is sub-optimal in two closely related ways:

1. No locking at all is required on systems where the b.L switcher is
   not configured.

2. Finer-grained locking can be used on systems where the b.L switcher
   is present.

This patch resolves both of the above by introducing a separate,
finer-grained lock and providing conditionally compiled inlines to
lock/unlock it.

Signed-off-by: Daniel Thompson
Cc: Thomas Gleixner
Cc: Jason Cooper
Cc: Russell King
Cc: Marc Zyngier
---
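Notes (not for the commit log): the #ifdef'd inline pattern the patch
introduces can be tried outside the kernel. The sketch below is
illustrative only -- a pthread mutex stands in for the raw spinlock, a
made-up BL_SWITCHER define stands in for CONFIG_BL_SWITCHER, and the
irqsave flags have no userspace equivalent so they are dropped. Build
with something like: cc -std=c11 -pthread -DBL_SWITCHER sketch.c

#include <pthread.h>
#include <stdio.h>

#ifdef BL_SWITCHER
static pthread_mutex_t cpu_map_migration_lock = PTHREAD_MUTEX_INITIALIZER;

static inline void bl_migration_lock(void)
{
        pthread_mutex_lock(&cpu_map_migration_lock);
}

static inline void bl_migration_unlock(void)
{
        pthread_mutex_unlock(&cpu_map_migration_lock);
}
#else
/* No switcher means no writer to the cpu map: the lock compiles away. */
static inline void bl_migration_lock(void) {}
static inline void bl_migration_unlock(void) {}
#endif

int main(void)
{
        bl_migration_lock();
        puts("cpu map is stable inside this section");
        bl_migration_unlock();
        return 0;
}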
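The same toy model can show why gic_migrate_target() (in the last hunk
below) may defer draining the old core's pending SGIs until after the
locks are dropped: once the map is updated under the migration lock, no
new IPI can land on the old core, so its pending set is static. In this
sketch, sender() loosely mimics gic_raise_softirq() and the pending[]
counters stand in for the distributor's per-core pending state; all
names are invented for illustration. It prints core0=0 core1=100000.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_mutex_t cpu_map_migration_lock = PTHREAD_MUTEX_INITIALIZER;
static int cpu_map;            /* which core new IPIs land on */
static atomic_int pending[2];  /* per-core count of pending IPIs */

static void *sender(void *unused)
{
        (void)unused;
        for (int i = 0; i < 100000; i++) {
                pthread_mutex_lock(&cpu_map_migration_lock);
                atomic_fetch_add(&pending[cpu_map], 1); /* pend on mapped core */
                pthread_mutex_unlock(&cpu_map_migration_lock);
        }
        return NULL;
}

int main(void)
{
        pthread_t t;

        pthread_create(&t, NULL, sender, NULL);

        /* Redirect new IPIs to core 1... */
        pthread_mutex_lock(&cpu_map_migration_lock);
        cpu_map = 1;
        pthread_mutex_unlock(&cpu_map_migration_lock);

        /*
         * ...and only then migrate the old core's pending set. It can
         * no longer grow, so this is safe with the lock dropped.
         */
        atomic_fetch_add(&pending[1], atomic_exchange(&pending[0], 0));

        pthread_join(t, NULL);
        printf("core0=%d core1=%d\n", atomic_load(&pending[0]),
               atomic_load(&pending[1]));
        return 0;
}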
 drivers/irqchip/irq-gic.c | 36 +++++++++++++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
index d617ee5a3d8a..a9ed64dcc84b 100644
--- a/drivers/irqchip/irq-gic.c
+++ b/drivers/irqchip/irq-gic.c
@@ -73,6 +73,27 @@ struct gic_chip_data {
 static DEFINE_RAW_SPINLOCK(irq_controller_lock);
 
 /*
+ * This lock is used by the big.LITTLE migration code to ensure no IPIs
+ * can be pended on the old core after the map has been updated.
+ */
+#ifdef CONFIG_BL_SWITCHER
+static DEFINE_RAW_SPINLOCK(cpu_map_migration_lock);
+
+static inline void bl_migration_lock(unsigned long *flags)
+{
+        raw_spin_lock_irqsave(&cpu_map_migration_lock, *flags);
+}
+
+static inline void bl_migration_unlock(unsigned long flags)
+{
+        raw_spin_unlock_irqrestore(&cpu_map_migration_lock, flags);
+}
+#else
+static inline void bl_migration_lock(unsigned long *flags) {}
+static inline void bl_migration_unlock(unsigned long flags) {}
+#endif
+
+/*
  * The GIC mapping of CPU interfaces does not necessarily match
  * the logical CPU numbering. Let's use a mapping as returned
  * by the GIC itself.
@@ -624,7 +645,7 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
         int cpu;
         unsigned long flags, map = 0;
 
-        raw_spin_lock_irqsave(&irq_controller_lock, flags);
+        bl_migration_lock(&flags);
 
         /* Convert our logical CPU mask into a physical one. */
         for_each_cpu(cpu, mask)
@@ -639,7 +660,7 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
         /* this always happens on GIC0 */
         writel_relaxed(map << 16 | irq, gic_data_dist_base(&gic_data[0]) + GIC_DIST_SOFTINT);
 
-        raw_spin_unlock_irqrestore(&irq_controller_lock, flags);
+        bl_migration_unlock(flags);
 }
 #endif
 
@@ -710,8 +731,17 @@ void gic_migrate_target(unsigned int new_cpu_id)
 
         raw_spin_lock(&irq_controller_lock);
 
-        /* Update the target interface for this logical CPU */
+        /*
+         * Update the target interface for this logical CPU
+         *
+         * From the point we release the cpu_map_migration_lock any new
+         * SGIs will be pended on the new cpu which makes the set of SGIs
+         * pending on the old cpu static. That means we can defer the
+         * migration until after we have released the irq_controller_lock.
+         */
+        raw_spin_lock(&cpu_map_migration_lock);
         gic_cpu_map[cpu] = 1 << new_cpu_id;
+        raw_spin_unlock(&cpu_map_migration_lock);
 
         /*
          * Find all the peripheral interrupts targetting the current