From patchwork Sat Feb 11 02:05:22 2017
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 9567789
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: julien.grall@arm.com, sstabellini@kernel.org
Date: Fri, 10 Feb 2017 18:05:22 -0800
Message-Id: <1486778723-25586-1-git-send-email-sstabellini@kernel.org>
X-Mailer: git-send-email 1.9.1
Subject: [Xen-devel] [PATCH v4 1/2] arm: read/write rank->vcpu atomically

We don't need a lock in vgic_get_target_vcpu anymore, solving the
following lock inversion bug: the rank lock should be taken first,
then the vgic lock. However, gic_update_one_lr is called with the
vgic lock held, and it calls vgic_get_target_vcpu, which tries to
obtain the rank lock.

Coverity-ID: 1381855
Coverity-ID: 1381853

Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/vgic-v2.c |  6 +++---
 xen/arch/arm/vgic-v3.c |  6 +++---
 xen/arch/arm/vgic.c    | 27 +++++----------------------
 3 files changed, 11 insertions(+), 28 deletions(-)

diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 3dbcfe8..b30379e 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -79,7 +79,7 @@ static uint32_t vgic_fetch_itargetsr(struct vgic_irq_rank *rank,
     offset &= ~(NR_TARGETS_PER_ITARGETSR - 1);
 
     for ( i = 0; i < NR_TARGETS_PER_ITARGETSR; i++, offset++ )
-        reg |= (1 << rank->vcpu[offset]) << (i * NR_BITS_PER_TARGET);
+        reg |= (1 << read_atomic(&rank->vcpu[offset])) << (i * NR_BITS_PER_TARGET);
 
     return reg;
 }
@@ -152,7 +152,7 @@ static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
         /* The vCPU ID always starts from 0 */
         new_target--;
 
-        old_target = rank->vcpu[offset];
+        old_target = read_atomic(&rank->vcpu[offset]);
 
         /* Only migrate the vIRQ if the target vCPU has changed */
         if ( new_target != old_target )
@@ -162,7 +162,7 @@ static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
                              virq);
         }
 
-        rank->vcpu[offset] = new_target;
+        write_atomic(&rank->vcpu[offset], new_target);
     }
 }
 
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index d61479d..7dc9b6f 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -108,7 +108,7 @@ static uint64_t vgic_fetch_irouter(struct vgic_irq_rank *rank,
     /* Get the index in the rank */
     offset &= INTERRUPT_RANK_MASK;
 
-    return vcpuid_to_vaffinity(rank->vcpu[offset]);
+    return vcpuid_to_vaffinity(read_atomic(&rank->vcpu[offset]));
 }
 
 /*
@@ -136,7 +136,7 @@ static void vgic_store_irouter(struct domain *d, struct vgic_irq_rank *rank,
     offset &= virq & INTERRUPT_RANK_MASK;
 
     new_vcpu = vgic_v3_irouter_to_vcpu(d, irouter);
-    old_vcpu = d->vcpu[rank->vcpu[offset]];
+    old_vcpu = d->vcpu[read_atomic(&rank->vcpu[offset])];
 
     /*
      * From the spec (see 8.9.13 in IHI 0069A), any write with an
@@ -154,7 +154,7 @@ static void vgic_store_irouter(struct domain *d, struct vgic_irq_rank *rank,
     if ( new_vcpu != old_vcpu )
         vgic_migrate_irq(old_vcpu, new_vcpu, virq);
 
-    rank->vcpu[offset] = new_vcpu->vcpu_id;
+    write_atomic(&rank->vcpu[offset], new_vcpu->vcpu_id);
 }
 
 static inline bool vgic_reg64_check_access(struct hsr_dabt dabt)
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 364d5f0..3dd9044 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -85,7 +85,7 @@ static void vgic_rank_init(struct vgic_irq_rank *rank, uint8_t index,
     rank->index = index;
 
     for ( i = 0; i < NR_INTERRUPT_PER_RANK; i++ )
-        rank->vcpu[i] = vcpu;
+        write_atomic(&rank->vcpu[i], vcpu);
 }
 
 int domain_vgic_register(struct domain *d, int *mmio_count)
@@ -218,28 +218,11 @@ int vcpu_vgic_free(struct vcpu *v)
     return 0;
 }
 
-/* The function should be called by rank lock taken. */
-static struct vcpu *__vgic_get_target_vcpu(struct vcpu *v, unsigned int virq)
-{
-    struct vgic_irq_rank *rank = vgic_rank_irq(v, virq);
-
-    ASSERT(spin_is_locked(&rank->lock));
-
-    return v->domain->vcpu[rank->vcpu[virq & INTERRUPT_RANK_MASK]];
-}
-
-/* takes the rank lock */
 struct vcpu *vgic_get_target_vcpu(struct vcpu *v, unsigned int virq)
 {
-    struct vcpu *v_target;
     struct vgic_irq_rank *rank = vgic_rank_irq(v, virq);
-    unsigned long flags;
-
-    vgic_lock_rank(v, rank, flags);
-    v_target = __vgic_get_target_vcpu(v, virq);
-    vgic_unlock_rank(v, rank, flags);
-
-    return v_target;
+    int target = read_atomic(&rank->vcpu[virq & INTERRUPT_RANK_MASK]);
+    return v->domain->vcpu[target];
 }
 
 static int vgic_get_virq_priority(struct vcpu *v, unsigned int virq)
@@ -326,7 +309,7 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
 
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
-        v_target = __vgic_get_target_vcpu(v, irq);
+        v_target = vgic_get_target_vcpu(v, irq);
         p = irq_to_pending(v_target, irq);
         clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
         gic_remove_from_queues(v_target, irq);
@@ -368,7 +351,7 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
 
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
-        v_target = __vgic_get_target_vcpu(v, irq);
+        v_target = vgic_get_target_vcpu(v, irq);
         p = irq_to_pending(v_target, irq);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
         spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
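
For reference, here is a minimal standalone sketch of the pattern the patch
relies on: writers still update rank->vcpu[] while holding the rank lock, but
readers such as vgic_get_target_vcpu() now load the entry with no lock at all,
counting on single-word loads/stores being torn-free. This is not Xen code:
C11 atomics stand in for Xen's read_atomic()/write_atomic() helpers, and the
demo_* names and the modulo indexing (in place of virq & INTERRUPT_RANK_MASK)
are invented for the example.

/* Build with: cc -pthread demo.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define NR_INTERRUPT_PER_RANK 32

struct demo_rank {
    pthread_mutex_t lock;                        /* taken by writers only     */
    _Atomic uint8_t vcpu[NR_INTERRUPT_PER_RANK]; /* one target vCPU per vIRQ  */
};

/* Writer path, analogous to vgic_store_itargetsr()/vgic_store_irouter():
 * the lock serialises concurrent writers; the store itself is atomic so
 * lockless readers see either the old or the new target, never a mix. */
static void demo_set_target(struct demo_rank *rank, unsigned int virq,
                            uint8_t new_target)
{
    pthread_mutex_lock(&rank->lock);
    atomic_store_explicit(&rank->vcpu[virq % NR_INTERRUPT_PER_RANK],
                          new_target, memory_order_relaxed);
    pthread_mutex_unlock(&rank->lock);
}

/* Lockless reader path, analogous to the new vgic_get_target_vcpu(). */
static uint8_t demo_get_target(struct demo_rank *rank, unsigned int virq)
{
    return atomic_load_explicit(&rank->vcpu[virq % NR_INTERRUPT_PER_RANK],
                                memory_order_relaxed);
}

int main(void)
{
    struct demo_rank rank = { .lock = PTHREAD_MUTEX_INITIALIZER };

    demo_set_target(&rank, 34, 1);   /* "migrate" vIRQ 34 to vCPU1 */
    printf("vIRQ 34 -> vCPU%u\n", (unsigned)demo_get_target(&rank, 34));
    return 0;
}

The design point this illustrates is the one the commit message makes: once
the read side no longer needs the rank lock, gic_update_one_lr can call
vgic_get_target_vcpu while holding the vgic lock without creating a
rank-lock/vgic-lock ordering cycle.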