From patchwork Sun Apr 23 17:08:30 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 9694937
From: Christoffer Dall
To: Paolo Bonzini, Radim Krčmář
Cc: Marc Zyngier, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, Christoffer Dall
Subject: [PULL 20/79] KVM: arm/arm64: vgic: Only set underflow when actually out of LRs
Date: Sun, 23 Apr 2017 19:08:30 +0200
Message-Id: <20170423170929.27334-21-cdall@linaro.org>
In-Reply-To: <20170423170929.27334-1-cdall@linaro.org>
References: <20170423170929.27334-1-cdall@linaro.org>

We currently assume that all the interrupts in our AP list will be
queued to LRs, but that's not necessarily the case, because some of
them could have been migrated away to different VCPUs and only the
VCPU thread itself can remove interrupts from its AP list.

Therefore, slightly change the logic to only set the underflow
interrupt when we actually run out of LRs. As it turns out, this
allows us to further simplify the handling in vgic_sync_hwstate in
later patches.

Acked-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---
 virt/kvm/arm/vgic/vgic.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 1043291..442f7df 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -601,10 +601,8 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
 
 	DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&vgic_cpu->ap_list_lock));
 
-	if (compute_ap_list_depth(vcpu) > kvm_vgic_global_state.nr_lr) {
-		vgic_set_underflow(vcpu);
+	if (compute_ap_list_depth(vcpu) > kvm_vgic_global_state.nr_lr)
 		vgic_sort_ap_list(vcpu);
-	}
 
 	list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) {
 		spin_lock(&irq->irq_lock);
@@ -623,8 +621,12 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
 next:
 		spin_unlock(&irq->irq_lock);
 
-		if (count == kvm_vgic_global_state.nr_lr)
+		if (count == kvm_vgic_global_state.nr_lr) {
+			if (!list_is_last(&irq->ap_list,
+					  &vgic_cpu->ap_list_head))
+				vgic_set_underflow(vcpu);
 			break;
+		}
 	}
 
 	vcpu->arch.vgic_cpu.used_lrs = count;
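
[Editor's illustration, not part of the patch.] For readers less familiar
with the vgic flush path, the standalone userspace sketch below models the
rule this patch introduces: underflow is requested only when the loop fills
the last LR while unqueued interrupts still remain on the AP list. NR_LR,
flush_lr_state, and the array-index model are invented for this example;
the real vgic_flush_lr_state() uses list_is_last() on the ap_list node and
takes per-interrupt locks, both omitted here.

/* Hypothetical model of the post-patch underflow rule; not kernel code. */
#include <stdbool.h>
#include <stdio.h>

#define NR_LR 4	/* made-up number of list registers */

static bool underflow;

static void vgic_set_underflow(void)
{
	underflow = true;
}

/* Queue up to NR_LR interrupts; return how many LRs were used. */
static int flush_lr_state(int ap_list_depth)
{
	int count = 0;

	for (int i = 0; i < ap_list_depth; i++) {
		/* ...queue interrupt i to LR 'count'... */
		count++;

		if (count == NR_LR) {
			/* New rule: only flag underflow if entries remain. */
			if (i != ap_list_depth - 1)
				vgic_set_underflow();
			break;
		}
	}
	return count;
}

int main(void)
{
	printf("depth 3 -> used %d, underflow %d\n", flush_lr_state(3), underflow);
	underflow = false;
	printf("depth 4 -> used %d, underflow %d\n", flush_lr_state(4), underflow);
	underflow = false;
	printf("depth 6 -> used %d, underflow %d\n", flush_lr_state(6), underflow);
	return 0;
}

Run as-is, this prints underflow 0 for depths 3 and 4 but underflow 1 for
depth 6, matching the intent of the patch: an AP list that exactly fills
the available LRs no longer triggers the underflow interrupt.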