From patchwork Wed Mar  4 10:12:49 2015
X-Patchwork-Submitter: Daniel Thompson
X-Patchwork-Id: 5934911
From: Daniel Thompson <daniel.thompson@linaro.org>
To: Thomas Gleixner
Subject: [PATCH 4.0-rc1 v17 1/6] irqchip: gic: Optimize locking in
 gic_raise_softirq
Date: Wed, 4 Mar 2015 10:12:49 +0000
Message-Id: <1425463974-23568-2-git-send-email-daniel.thompson@linaro.org>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1425463974-23568-1-git-send-email-daniel.thompson@linaro.org>
References: <1422022952-31552-1-git-send-email-daniel.thompson@linaro.org>
 <1425463974-23568-1-git-send-email-daniel.thompson@linaro.org>
Cc: Daniel Thompson, linaro-kernel@lists.linaro.org, Russell King,
 Jason Cooper, patches@linaro.org, Marc Zyngier, Catalin Marinas,
 Will Deacon, linux-kernel@vger.kernel.org, Steven Rostedt,
 Sumit Semwal, Dmitry Pervushin, Dirk Behme, John Stultz, Tim Sander,
 Daniel Drake, Stephen Boyd, linux-arm-kernel@lists.infradead.org

Currently gic_raise_softirq() is locked using irq_controller_lock. This
lock is primarily used to make register read-modify-write sequences
atomic, but gic_raise_softirq() uses it instead to ensure that the
big.LITTLE migration logic can figure out when it is safe to migrate
interrupts between physical cores.

This is sub-optimal in two closely related ways:

1. No locking at all is required on systems where the b.L switcher is
   not configured.

2. Finer grain locking can be used on systems where the b.L switcher is
   present.

This patch resolves both of the above by introducing a separate finer
grain lock and providing conditionally compiled inlines to lock/unlock
it.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Cc: Thomas Gleixner
Cc: Jason Cooper
Cc: Russell King
Cc: Marc Zyngier
Acked-by: Nicolas Pitre
---
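Notes:

The net effect of the conditionally compiled helpers is that a
!CONFIG_BL_SWITCHER build compiles gic_raise_softirq() with no locking
at all, while a b.L build serializes it only against map migration.
The user-space sketch below models the invariant the new lock provides:
a reader never sees a half-migrated map, and once the writer drops the
lock every new SGI targets the new core, so the old core's pending set
stops growing. It uses pthread spinlocks in place of raw spinlocks and
names that merely mirror the patch; treat it as an illustration, not as
kernel code.

#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t map_migration_lock;

/* Logical-to-physical CPU map, analogous to gic_cpu_map[]. */
static unsigned int cpu_map[4] = { 1 << 0, 1 << 1, 1 << 2, 1 << 3 };

/* Model of gic_raise_softirq(): translate the target under the lock. */
static void raise_softirq(unsigned int cpu, unsigned int irq)
{
	unsigned int map;

	pthread_spin_lock(&map_migration_lock);
	map = cpu_map[cpu];	/* coherent old-or-new target, never torn */
	pthread_spin_unlock(&map_migration_lock);

	printf("SGI %u -> physical mask %#x\n", irq, map);
}

/* Model of the gic_migrate_target() map update. */
static void migrate_target(unsigned int cpu, unsigned int new_phys_cpu)
{
	pthread_spin_lock(&map_migration_lock);
	cpu_map[cpu] = 1 << new_phys_cpu;
	pthread_spin_unlock(&map_migration_lock);
	/*
	 * From here, every new SGI for 'cpu' lands on new_phys_cpu, so
	 * the set of SGIs pending on the old core can no longer grow.
	 */
}

int main(void)
{
	pthread_spin_init(&map_migration_lock, PTHREAD_PROCESS_PRIVATE);
	raise_softirq(0, 1);		/* -> physical mask 0x1 */
	migrate_target(0, 2);
	raise_softirq(0, 1);		/* -> physical mask 0x4 */
	return 0;
}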
 drivers/irqchip/irq-gic.c | 36 +++++++++++++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
index 4634cf7d0ec3..f2a0b4525b65 100644
--- a/drivers/irqchip/irq-gic.c
+++ b/drivers/irqchip/irq-gic.c
@@ -73,6 +73,27 @@ struct gic_chip_data {
 static DEFINE_RAW_SPINLOCK(irq_controller_lock);
 
 /*
+ * This lock is used by the big.LITTLE migration code to ensure no IPIs
+ * can be pended on the old core after the map has been updated.
+ */
+#ifdef CONFIG_BL_SWITCHER
+static DEFINE_RAW_SPINLOCK(cpu_map_migration_lock);
+
+static inline void gic_migration_lock(unsigned long *flags)
+{
+	raw_spin_lock_irqsave(&cpu_map_migration_lock, *flags);
+}
+
+static inline void gic_migration_unlock(unsigned long flags)
+{
+	raw_spin_unlock_irqrestore(&cpu_map_migration_lock, flags);
+}
+#else
+static inline void gic_migration_lock(unsigned long *flags) {}
+static inline void gic_migration_unlock(unsigned long flags) {}
+#endif
+
+/*
  * The GIC mapping of CPU interfaces does not necessarily match
  * the logical CPU numbering. Let's use a mapping as returned
  * by the GIC itself.
@@ -627,7 +648,7 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
 	int cpu;
 	unsigned long flags, map = 0;
 
-	raw_spin_lock_irqsave(&irq_controller_lock, flags);
+	gic_migration_lock(&flags);
 
 	/* Convert our logical CPU mask into a physical one. */
 	for_each_cpu(cpu, mask)
@@ -642,7 +663,7 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
 	/* this always happens on GIC0 */
 	writel_relaxed(map << 16 | irq, gic_data_dist_base(&gic_data[0]) + GIC_DIST_SOFTINT);
 
-	raw_spin_unlock_irqrestore(&irq_controller_lock, flags);
+	gic_migration_unlock(flags);
 }
 #endif
 
@@ -713,8 +734,17 @@ void gic_migrate_target(unsigned int new_cpu_id)
 
 	raw_spin_lock(&irq_controller_lock);
 
-	/* Update the target interface for this logical CPU */
+	/*
+	 * Update the target interface for this logical CPU
+	 *
+	 * From the point we release the cpu_map_migration_lock any new
+	 * SGIs will be pended on the new cpu which makes the set of SGIs
+	 * pending on the old cpu static. That means we can defer the
+	 * migration until after we have released the irq_controller_lock.
+	 */
+	raw_spin_lock(&cpu_map_migration_lock);
 	gic_cpu_map[cpu] = 1 << new_cpu_id;
+	raw_spin_unlock(&cpu_map_migration_lock);
 
 	/*
 	 * Find all the peripheral interrupts targetting the current
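
As an aside, not part of the patch: the comment block added to
gic_migrate_target() above encodes the ordering argument the rest of
this series relies on. The user-space model below restates that lock
nesting with pthread locks standing in for the raw spinlocks; the
"retarget" and "forward" steps are placeholder stubs, not the real GIC
register accesses, so read it only as a model of the ordering.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t irq_controller_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_spinlock_t cpu_map_migration_lock;
static unsigned int gic_cpu_map[4] = { 1 << 0, 1 << 1, 1 << 2, 1 << 3 };

static void migrate_target(unsigned int cpu, unsigned int new_cpu_id)
{
	pthread_mutex_lock(&irq_controller_lock);

	/*
	 * Flip the map under the fine-grained lock: any raise_softirq()
	 * that runs after this unlock pends its SGI on the new core, so
	 * the old core's pending set is now static.
	 */
	pthread_spin_lock(&cpu_map_migration_lock);
	gic_cpu_map[cpu] = 1 << new_cpu_id;
	pthread_spin_unlock(&cpu_map_migration_lock);

	/* ... retarget peripheral interrupts (register RMW) ... */

	pthread_mutex_unlock(&irq_controller_lock);

	/*
	 * Because the pending set stopped growing at the map flip, the
	 * leftover SGIs can be forwarded here, after the big lock has
	 * been dropped, exactly as the added comment describes.
	 */
	printf("forward static pending SGIs for cpu %u\n", cpu);
}

int main(void)
{
	pthread_spin_init(&cpu_map_migration_lock, PTHREAD_PROCESS_PRIVATE);
	migrate_target(0, 2);
	return 0;
}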