From patchwork Sun Nov 17 04:30:18 2013
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 3193621
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: Christoffer Dall, patches@linaro.org
Subject: [PATCH v3 7/9] KVM: arm-vgic: Support unqueueing of LRs to the dist
Date: Sat, 16 Nov 2013 20:30:18 -0800
Message-Id: <1384662620-13795-8-git-send-email-christoffer.dall@linaro.org>
In-Reply-To: <1384662620-13795-1-git-send-email-christoffer.dall@linaro.org>
References: <1384662620-13795-1-git-send-email-christoffer.dall@linaro.org>
X-Mailer: git-send-email 1.7.10.4

To properly access the VGIC state from user space, it is impractical to have
to loop through all the LRs in every register access function.  Instead,
support moving all pending state from the LRs to the distributor, but leave
LRs that hold only active state alone.

Note that to accurately present the active and pending state to VCPUs
reading these distributor registers from a live VM, we would have to stop
all VCPUs other than the calling VCPU, ask each VCPU to unqueue its LR state
onto the distributor, and add fields to track active state on the
distributor side as well.  We don't have any users of such functionality
yet, and there are other inaccuracies in the GIC emulation, so don't provide
accurate synchronized access to this state just yet.  However, when the time
comes, having this function should help.

Signed-off-by: Christoffer Dall

Changelog[3]:
 - New patch in series
---
 virt/kvm/arm/vgic.c | 80 +++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 75 insertions(+), 5 deletions(-)

diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index ecf6dcf..44c669b 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -589,6 +589,72 @@ static bool handle_mmio_sgi_reg(struct kvm_vcpu *vcpu,
 	return false;
 }
 
+#define LR_CPUID(lr) \
+	(((lr) & GICH_LR_PHYSID_CPUID) >> GICH_LR_PHYSID_CPUID_SHIFT)
+#define LR_IRQID(lr) \
+	((lr) & GICH_LR_VIRTUALID)
+
+static void vgic_retire_lr(int lr_nr, int irq, struct vgic_cpu *vgic_cpu)
+{
+	clear_bit(lr_nr, vgic_cpu->lr_used);
+	vgic_cpu->vgic_lr[lr_nr] &= ~GICH_LR_STATE;
+	vgic_cpu->vgic_irq_lr_map[irq] = LR_EMPTY;
+}
+
+/**
+ * vgic_unqueue_irqs - move pending IRQs from LRs to the distributor
+ * @vcpu: Pointer to the VCPU whose LRs should be unqueued
+ *
+ * Move any pending IRQs that have already been assigned to LRs back to the
+ * emulated distributor state so that the complete emulated state can be read
+ * from the main emulation structures without investigating the LRs.
+ *
+ * Note that IRQs in the active state in the LRs get their pending state moved
+ * to the distributor, but the active state stays in the LRs, because we don't
+ * track the active state on the distributor side.
+ */
+static void vgic_unqueue_irqs(struct kvm_vcpu *vcpu)
+{
+	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+	int vcpu_id = vcpu->vcpu_id;
+	int i, irq, source_cpu;
+	u32 *lr;
+
+	for_each_set_bit(i, vgic_cpu->lr_used, vgic_cpu->nr_lr) {
+		lr = &vgic_cpu->vgic_lr[i];
+		irq = LR_IRQID(*lr);
+		source_cpu = LR_CPUID(*lr);
+
+		/*
+		 * If the LR holds only an active interrupt (not pending) then
+		 * just leave it alone.
+		 */
+		if (!__test_and_clear_bit(__ffs(GICH_LR_PENDING_BIT),
+					  (unsigned long *)lr))
+			continue;
+
+		/*
+		 * If the interrupt was only pending (not "active" or "pending
+		 * and active") then we have effectively cleared the LR and it
+		 * should be marked as free for other use.
+		 */
+		if (!(*lr & GICH_LR_STATE))
+			vgic_retire_lr(i, irq, vgic_cpu);
+
+		/*
+		 * Finally, reestablish the pending state on the distributor
+		 * and the CPU interface.  It may have already been pending,
+		 * in which case we are only setting a few bits that were
+		 * already set.
+		 */
+		vgic_dist_irq_set(vcpu, irq);
+		if (irq < VGIC_NR_SGIS)
+			dist->irq_sgi_sources[vcpu_id][irq] |= 1 << source_cpu;
+		vgic_update_state(vcpu->kvm);
+	}
+}
+
 static bool handle_mmio_sgi_clear(struct kvm_vcpu *vcpu,
 				  struct kvm_exit_mmio *mmio,
 				  phys_addr_t offset)
@@ -848,8 +914,6 @@ static void vgic_update_state(struct kvm *kvm)
 	}
 }
 
-#define LR_CPUID(lr) \
-	(((lr) & GICH_LR_PHYSID_CPUID) >> GICH_LR_PHYSID_CPUID_SHIFT)
 #define MK_LR_PEND(src, irq) \
 	(GICH_LR_PENDING_BIT | ((src) << GICH_LR_PHYSID_CPUID_SHIFT) | (irq))
 
@@ -871,9 +935,7 @@ static void vgic_retire_disabled_irqs(struct kvm_vcpu *vcpu)
 		int irq = vgic_cpu->vgic_lr[lr] & GICH_LR_VIRTUALID;
 
 		if (!vgic_irq_is_enabled(vcpu, irq)) {
-			vgic_cpu->vgic_irq_lr_map[irq] = LR_EMPTY;
-			clear_bit(lr, vgic_cpu->lr_used);
-			vgic_cpu->vgic_lr[lr] &= ~GICH_LR_STATE;
+			vgic_retire_lr(lr, irq, vgic_cpu);
 			if (vgic_irq_is_active(vcpu, irq))
 				vgic_irq_clear_active(vcpu, irq);
 		}
@@ -1664,6 +1726,14 @@ static int vgic_attr_regs_access(struct kvm_device *dev,
 		}
 	}
 
+	/*
+	 * Move all pending IRQs from the LRs on all VCPUs so the pending
+	 * state can be properly represented in the register state accessible
+	 * through this API.
+	 */
+	kvm_for_each_vcpu(c, tmp_vcpu, dev->kvm)
+		vgic_unqueue_irqs(tmp_vcpu);
+
 	offset -= r->base;
 	r->handle_mmio(vcpu, &mmio, offset);
 	spin_unlock(&vgic->lock);
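
For reference, and not part of the patch itself: a minimal user space sketch
of how the distributor register state served by vgic_attr_regs_access() might
be read once pending LR state has been unqueued.  It assumes the VGIC KVM
device attribute API proposed earlier in this series
(KVM_DEV_ARM_VGIC_GRP_DIST_REGS and the cpuid/offset attr encoding with
KVM_DEV_ARM_VGIC_CPUID_SHIFT); the helper name read_dist_reg, the vgic_fd
descriptor (a device fd from KVM_CREATE_DEVICE), and the GICD_ISPENDR offset
0x200 are illustrative only.

	/* Illustrative sketch, not part of this patch.  Assumes the VGIC
	 * device attribute API from earlier patches in this series. */
	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int read_dist_reg(int vgic_fd, uint32_t cpuid, uint32_t offset,
				 uint32_t *val)
	{
		struct kvm_device_attr attr = {
			.group	= KVM_DEV_ARM_VGIC_GRP_DIST_REGS,
			/* attr encodes the requesting cpu id and the register
			 * offset within the emulated distributor. */
			.attr	= ((uint64_t)cpuid << KVM_DEV_ARM_VGIC_CPUID_SHIFT) |
				  offset,
			.addr	= (uint64_t)(unsigned long)val,
		};

		/* KVM unqueues pending LR state into the distributor before
		 * servicing this read, so *val reflects the full pending state. */
		return ioctl(vgic_fd, KVM_GET_DEVICE_ATTR, &attr);
	}

	/* Example use: read the first GICD_ISPENDR register (architectural
	 * GICv2 offset 0x200) as seen from VCPU 0:
	 *
	 *	uint32_t pending;
	 *	read_dist_reg(vgic_fd, 0, 0x200, &pending);
	 */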