From patchwork Tue Mar 21 21:10:53 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 9637607
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: kvm@vger.kernel.org, Marc Zyngier, Andre Przywara, Eric Auger,
 Christoffer Dall
Subject: [PATCH v2 04/10] KVM: arm/arm64: vgic: Only set underflow when actually out of LRs
Date: Tue, 21 Mar 2017 22:10:53 +0100
Message-Id: <20170321211059.8719-5-cdall@linaro.org>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <20170321211059.8719-1-cdall@linaro.org>
References: <20170321211059.8719-1-cdall@linaro.org>

We currently assume that all the interrupts in our AP list will be
queued to LRs, but that's not necessarily the case, because some of
them could have been migrated away to different VCPUs, and only the
VCPU thread itself can remove interrupts from its AP list.

Therefore, slightly change the logic so that we only set the underflow
interrupt when we actually run out of LRs.  As it turns out, this
allows us to further simplify the handling in vgic_sync_hwstate in
later patches.

Signed-off-by: Christoffer Dall
Acked-by: Marc Zyngier
---
Changes since v1:
 - New patch

 virt/kvm/arm/vgic/vgic.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 1043291..442f7df 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -601,10 +601,8 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
 
 	DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&vgic_cpu->ap_list_lock));
 
-	if (compute_ap_list_depth(vcpu) > kvm_vgic_global_state.nr_lr) {
-		vgic_set_underflow(vcpu);
+	if (compute_ap_list_depth(vcpu) > kvm_vgic_global_state.nr_lr)
 		vgic_sort_ap_list(vcpu);
-	}
 
 	list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) {
 		spin_lock(&irq->irq_lock);
@@ -623,8 +621,12 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
 next:
 		spin_unlock(&irq->irq_lock);
 
-		if (count == kvm_vgic_global_state.nr_lr)
+		if (count == kvm_vgic_global_state.nr_lr) {
+			if (!list_is_last(&irq->ap_list,
+					  &vgic_cpu->ap_list_head))
+				vgic_set_underflow(vcpu);
 			break;
+		}
 	}
 
 	vcpu->arch.vgic_cpu.used_lrs = count;
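
[Editorial note, not part of the patch: for readers skimming the diff out of
context, here is a condensed sketch of how the flush path decides to set
underflow after this change.  It is an illustration only, not the kernel
source: the irq_lock handling, the target check, and the multi-source SGI
handling are elided, and vgic_populate_lr() is named here merely as a stand-in
for the existing per-IRQ queueing step in vgic.c.]

	/*
	 * Condensed sketch of vgic_flush_lr_state() with this patch
	 * applied.  Locking and the "does this IRQ still target this
	 * VCPU?" check are elided for brevity.
	 */
	static void vgic_flush_lr_state_sketch(struct kvm_vcpu *vcpu)
	{
		struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
		struct vgic_irq *irq;
		int count = 0;

		/* Sorting only pays off if the AP list may overcommit the LRs. */
		if (compute_ap_list_depth(vcpu) > kvm_vgic_global_state.nr_lr)
			vgic_sort_ap_list(vcpu);

		list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) {
			/* Program the next free LR with this interrupt. */
			vgic_populate_lr(vcpu, irq, count++);

			if (count == kvm_vgic_global_state.nr_lr) {
				/*
				 * Out of LRs: request the underflow
				 * maintenance interrupt only if entries
				 * beyond this one remain on the AP list.
				 */
				if (!list_is_last(&irq->ap_list,
						  &vgic_cpu->ap_list_head))
					vgic_set_underflow(vcpu);
				break;
			}
		}

		vcpu->arch.vgic_cpu.used_lrs = count;
	}

The design point is that filling every LR is not by itself a reason to ask for
a maintenance interrupt; underflow is only needed when interrupts are left
over on the AP list after the LRs are full.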