From patchwork Tue Aug 6 11:44:12 2013
X-Patchwork-Submitter: Raghavendra K T
X-Patchwork-Id: 2839368
From: Raghavendra K T
Date: Tue, 06 Aug 2013 17:14:12 +0530
Message-Id: <20130806114412.20643.84141.sendpatchset@codeblue.in.ibm.com>
In-Reply-To: <20130806114020.20643.57235.sendpatchset@codeblue.in.ibm.com>
References: <20130806114020.20643.57235.sendpatchset@codeblue.in.ibm.com>
Subject: [PATCH V12 11/14] xen/pvticketlock: Allow interrupts to be enabled
 while blocking
X-Mailing-List: kvm@vger.kernel.org

xen/pvticketlock: Allow interrupts to be enabled while blocking

From: Jeremy Fitzhardinge

If interrupts were enabled when taking the spinlock, we can leave them
enabled while blocking to get the lock.
If we can enable interrupts while waiting for the lock to become
available, then we may take an interrupt before entering the poll, and
the handler may itself take a spinlock that ends up in the slow state
(invalidating the per-cpu "lock" and "want" values). In that case, when
the interrupt handler returns, the event channel is still pending, so
the poll returns immediately and we drop back out to the main spinlock
loop to retry with freshly set up (lock, want) values.

Signed-off-by: Jeremy Fitzhardinge
Reviewed-by: Konrad Rzeszutek Wilk
Signed-off-by: Raghavendra K T
Acked-by: Ingo Molnar
---
 arch/x86/xen/spinlock.c | 46 ++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 40 insertions(+), 6 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 546112e..0438b93 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -142,7 +142,20 @@ static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	 * partially setup state.
 	 */
 	local_irq_save(flags);
-
+	/*
+	 * We don't really care if we're overwriting some other
+	 * (lock,want) pair, as that would mean that we're currently
+	 * in an interrupt context, and the outer context had
+	 * interrupts enabled.  That has already kicked the VCPU out
+	 * of xen_poll_irq(), so it will just return spuriously and
+	 * retry with newly setup (lock,want).
+	 *
+	 * The ordering protocol on this is that the "lock" pointer
+	 * may only be set non-NULL if the "want" ticket is correct.
+	 * If we're updating "want", we must first clear "lock".
+	 */
+	w->lock = NULL;
+	smp_wmb();
 	w->want = want;
 	smp_wmb();
 	w->lock = lock;
@@ -157,24 +170,43 @@ static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	/* Only check lock once pending cleared */
 	barrier();
 
-	/* Mark entry to slowpath before doing the pickup test to make
-	   sure we don't deadlock with an unlocker. */
+	/*
+	 * Mark entry to slowpath before doing the pickup test to make
+	 * sure we don't deadlock with an unlocker.
+	 */
 	__ticket_enter_slowpath(lock);
 
-	/* check again make sure it didn't become free while
-	   we weren't looking  */
+	/*
+	 * check again make sure it didn't become free while
+	 * we weren't looking
+	 */
 	if (ACCESS_ONCE(lock->tickets.head) == want) {
 		add_stats(TAKEN_SLOW_PICKUP, 1);
 		goto out;
 	}
 
+	/* Allow interrupts while blocked */
+	local_irq_restore(flags);
+
+	/*
+	 * If an interrupt happens here, it will leave the wakeup irq
+	 * pending, which will cause xen_poll_irq() to return
+	 * immediately.
+	 */
+
 	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
 	xen_poll_irq(irq);
 	add_stats(TAKEN_SLOW_SPURIOUS, !xen_test_irq_pending(irq));
+
+	local_irq_save(flags);
+
 	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 out:
 	cpumask_clear_cpu(cpu, &waiting_cpus);
 	w->lock = NULL;
+
+	local_irq_restore(flags);
+
 	spin_time_accum_blocked(start);
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
@@ -188,7 +220,9 @@ static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 	for_each_cpu(cpu, &waiting_cpus) {
 		const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
 
-		if (w->lock == lock && w->want == next) {
+		/* Make sure we read lock before want */
+		if (ACCESS_ONCE(w->lock) == lock &&
+		    ACCESS_ONCE(w->want) == next) {
 			add_stats(RELEASED_SLOW_KICKED, 1);
 			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
 			break;
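
For reference, below is a minimal userspace sketch of the (lock, want)
publication protocol described in the comment above, with C11 atomics
standing in for the kernel's smp_wmb() and ACCESS_ONCE(). The names
struct waiting, publish() and should_kick() are illustrative only, not
taken from the kernel source; the point is that an observer can never
pair a non-NULL lock pointer with a stale want ticket.

/*
 * Sketch of the (lock, want) ordering protocol: release stores order
 * the NULL store before want, and want before lock, mirroring the two
 * smp_wmb() calls in xen_lock_spinning().
 */
#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>

typedef unsigned short ticket_t;

struct waiting {
	_Atomic(void *) lock;	/* non-NULL only while want is valid */
	_Atomic ticket_t want;
};

/* Waiter side: publish (lock, want) before blocking. */
static void publish(struct waiting *w, void *lock, ticket_t want)
{
	/* Clear lock first so no observer pairs it with a stale want. */
	atomic_store_explicit(&w->lock, NULL, memory_order_relaxed);
	atomic_store_explicit(&w->want, want, memory_order_release);
	atomic_store_explicit(&w->lock, lock, memory_order_release);
}

/* Unlocker side: read lock before want, as xen_unlock_kick() does. */
static int should_kick(struct waiting *w, void *lock, ticket_t next)
{
	/*
	 * The acquire load of lock synchronizes with the release store
	 * above, so if we see the published pointer, the matching want
	 * value is guaranteed to be visible too.
	 */
	if (atomic_load_explicit(&w->lock, memory_order_acquire) != lock)
		return 0;
	return atomic_load_explicit(&w->want, memory_order_relaxed) == next;
}

int main(void)
{
	static int dummy_lock;
	static struct waiting w = { NULL, 0 };

	publish(&w, &dummy_lock, 42);
	printf("kick ticket 42? %d\n", should_kick(&w, &dummy_lock, 42)); /* 1 */
	printf("kick ticket  7? %d\n", should_kick(&w, &dummy_lock, 7));  /* 0 */
	return 0;
}

In the kernel patch the same effect is achieved with plain stores
separated by smp_wmb() on the writer side and with the "read lock
before want" ordering in xen_unlock_kick(); a false negative there is
harmless, since the waiter will be kicked on a later pass or wake
spuriously and retry.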