From patchwork Thu Dec 22 02:15:12 2016
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 9484025
From: Stefano Stabellini <sstabellini@kernel.org>
To: julien.grall@arm.com
Date: Wed, 21 Dec 2016 18:15:12 -0800
Message-Id: <1482372913-18366-3-git-send-email-sstabellini@kernel.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1482372913-18366-1-git-send-email-sstabellini@kernel.org>
References: <1482372913-18366-1-git-send-email-sstabellini@kernel.org>
Cc:
 xen-devel@lists.xenproject.org, sstabellini@kernel.org
Subject: [Xen-devel] [PATCH v2 3/4] arm, vgic_migrate_irq: take the right vgic lock
List-Id: Xen developer discussion

Always take the vgic lock of the old vcpu. When more than one irq
migration is requested before the first one completes, take the vgic
lock of the oldest vcpu.

Write the new vcpu id into the rank from vgic_migrate_irq, protected by
the oldest vgic vcpu lock.

Use barriers to ensure proper ordering between clearing inflight and
MIGRATING and setting vcpu to GIC_INVALID_VCPU.

Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/arm/gic.c         |  5 +++++
 xen/arch/arm/vgic-v2.c     | 12 +++---------
 xen/arch/arm/vgic-v3.c     |  6 +-----
 xen/arch/arm/vgic.c        | 50 +++++++++++++++++++++++++++++++++++++++-------
 xen/include/asm-arm/vgic.h |  3 ++-
 5 files changed, 54 insertions(+), 22 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 3189693..51148b4 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -512,6 +512,11 @@ static void gic_update_one_lr(struct vcpu *v, int i)
             struct vcpu *v_target = vgic_get_target_vcpu(v, irq);
             irq_set_affinity(p->desc, cpumask_of(v_target->processor));
         }
+        /*
+         * Clear MIGRATING, set new affinity, then clear vcpu. This
+         * barrier pairs with the one in vgic_migrate_irq.
+         */
+        smp_mb();
         p->vcpu = GIC_INVALID_VCPU;
     }
 }
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 3dbcfe8..38b1be1 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -154,15 +154,9 @@ static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
 
         old_target = rank->vcpu[offset];
 
-        /* Only migrate the vIRQ if the target vCPU has changed */
-        if ( new_target != old_target )
-        {
-            vgic_migrate_irq(d->vcpu[old_target],
-                             d->vcpu[new_target],
-                             virq);
-        }
-
-        rank->vcpu[offset] = new_target;
+        vgic_migrate_irq(d->vcpu[old_target],
+                         d->vcpu[new_target],
+                         virq, &rank->vcpu[offset]);
     }
 }
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index d61479d..6fb0fdd 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -150,11 +150,7 @@ static void vgic_store_irouter(struct domain *d, struct vgic_irq_rank *rank,
         if ( !new_vcpu )
             return;
 
-        /* Only migrate the IRQ if the target vCPU has changed */
-        if ( new_vcpu != old_vcpu )
-            vgic_migrate_irq(old_vcpu, new_vcpu, virq);
-
-        rank->vcpu[offset] = new_vcpu->vcpu_id;
+        vgic_migrate_irq(old_vcpu, new_vcpu, virq, &rank->vcpu[offset]);
 }
 
 static inline bool vgic_reg64_check_access(struct hsr_dabt dabt)
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index f2e3eda..cceac24 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -257,9 +257,8 @@ static int vgic_get_virq_priority(struct vcpu *v, unsigned int virq)
     return priority;
 }
 
-void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
+static void __vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
 {
-    unsigned long flags;
     struct pending_irq *p = irq_to_pending(old, irq);
 
     /* nothing to do for virtual interrupts */
@@ -272,12 +271,9 @@ void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
 
     perfc_incr(vgic_irq_migrates);
 
-    spin_lock_irqsave(&old->arch.vgic.lock, flags);
-
     if ( list_empty(&p->inflight) )
     {
         irq_set_affinity(p->desc, cpumask_of(new->processor));
-        spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
         return;
     }
     /* If the IRQ is still lr_pending, re-inject it to the new vcpu */
@@ -287,7 +283,6 @@ void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
         list_del_init(&p->lr_queue);
         list_del_init(&p->inflight);
         irq_set_affinity(p->desc, cpumask_of(new->processor));
-        spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
         vgic_vcpu_inject_irq(new, irq);
         return;
     }
@@ -296,7 +291,48 @@ void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
     if ( !list_empty(&p->inflight) )
         set_bit(GIC_IRQ_GUEST_MIGRATING, &p->status);
 
-    spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
+}
+
+void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq,
+                      uint8_t *rank_vcpu)
+{
+    struct pending_irq *p;
+    unsigned long flags;
+    struct vcpu *v;
+    uint8_t vcpu;
+
+    /* Only migrate the IRQ if the target vCPU has changed */
+    if ( new == old )
+        return;
+
+    /*
+     * In most cases, p->vcpu is either invalid or the same as "old".
+     * The only exceptions are cases where the interrupt has already
+     * been migrated to a different vcpu, but the irq migration is still
+     * in progress (GIC_IRQ_GUEST_MIGRATING has been set). If that is
+     * the case, then "old" points to an intermediary vcpu we don't care
+     * about. We want to take the lock on the older vcpu instead,
+     * because that is the one gic_update_one_lr holds.
+     *
+     * The vgic lock is the only lock protecting accesses to rank_vcpu
+     * from gic_update_one_lr. However, writes to rank_vcpu are still
+     * protected by the rank lock.
+     */
+    p = irq_to_pending(old, irq);
+    vcpu = p->vcpu;
+
+    /* This pairs with the barrier in gic_update_one_lr. */
+    smp_mb();
+
+    if ( vcpu != GIC_INVALID_VCPU )
+        v = old->domain->vcpu[vcpu];
+    else
+        v = old;
+
+    spin_lock_irqsave(&v->arch.vgic.lock, flags);
+    __vgic_migrate_irq(old, new, irq);
+    *rank_vcpu = new->vcpu_id;
+    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
 }
 
 void arch_move_irqs(struct vcpu *v)
diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
index fde5b32..dce2f84 100644
--- a/xen/include/asm-arm/vgic.h
+++ b/xen/include/asm-arm/vgic.h
@@ -314,7 +314,8 @@ extern int vcpu_vgic_free(struct vcpu *v);
 extern bool vgic_to_sgi(struct vcpu *v, register_t sgir,
                         enum gic_sgi_mode irqmode, int virq,
                         const struct sgi_target *target);
-extern void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq);
+extern void vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq,
+                             uint8_t *rank_vcpu);
 
 /* Reserve a specific guest vIRQ */
 extern bool vgic_reserve_virq(struct domain *d, unsigned int virq);