From patchwork Wed Sep 6 12:26:11 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 9940563
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, Eric Auger, Marc Zyngier
Cc: Andre Przywara, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, Christoffer Dall
Subject: [PATCH v3 4/5] KVM: arm/arm64: Support VGIC dist pend/active changes for mapped IRQs
Date: Wed, 6 Sep 2017 14:26:11 +0200
Message-Id: <20170906122612.18050-5-cdall@linaro.org>
In-Reply-To: <20170906122612.18050-1-cdall@linaro.org>
References: <20170906122612.18050-1-cdall@linaro.org>

For mapped IRQs (with the HW bit set in the LR) we have to follow some
rules of the architecture.  One of these rules is that the VM must not
be allowed to deactivate a virtual interrupt with the HW bit set unless
the physical interrupt is also active.

This works fine when injecting mapped interrupts, because we leave it up
to the injector to either set EOImode==1 or manually set the active
state of the physical interrupt.

However, the guest can set a virtual interrupt to be pending or active
by writing to the virtual distributor, which could lead to deactivating
a virtual interrupt with the HW bit set without the physical interrupt
being active.
We could set the physical interrupt to active whenever we are about to
enter the VM with a HW interrupt either pending or active, but that
would be really slow, especially on GICv2.  So we take the long way
around and do the hard work when needed, which is expected to be
extremely rare.

When the VM sets the pending state for a HW interrupt on the virtual
distributor we set the active state on the physical distributor, because
the virtual interrupt can become active and then the guest can
deactivate it.  When the VM clears the pending state we also clear it on
the physical side, because the injector might otherwise raise the
interrupt.

Signed-off-by: Christoffer Dall
---
 virt/kvm/arm/vgic/vgic-mmio.c | 33 +++++++++++++++++++++++++++++++++
 virt/kvm/arm/vgic/vgic.c      |  7 +++++++
 virt/kvm/arm/vgic/vgic.h      |  1 +
 3 files changed, 41 insertions(+)

diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index c1e4bdd..00003ae 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -131,6 +131,9 @@ void vgic_mmio_write_spending(struct kvm_vcpu *vcpu,
 		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
 
 		spin_lock(&irq->irq_lock);
+		if (irq->hw)
+			vgic_irq_set_phys_active(irq, true);
+
 		irq->pending_latch = true;
 
 		vgic_queue_irq_unlock(vcpu->kvm, irq);
@@ -149,6 +152,20 @@ void vgic_mmio_write_cpending(struct kvm_vcpu *vcpu,
 		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
 
 		spin_lock(&irq->irq_lock);
+		/*
+		 * We don't want the guest to effectively mask the physical
+		 * interrupt by doing a write to SPENDR followed by a write to
+		 * CPENDR for HW interrupts, so we clear the active state on
+		 * the physical side here.  This may lead to taking an
+		 * additional interrupt on the host, but that should not be a
+		 * problem as the worst that can happen is an additional vgic
+		 * injection.  We also clear the pending state to maintain
+		 * proper semantics for edge HW interrupts.
+		 */
+		if (irq->hw) {
+			vgic_irq_set_phys_pending(irq, false);
+			vgic_irq_set_phys_active(irq, false);
+		}
 
 		irq->pending_latch = false;
 
@@ -214,6 +231,22 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
 	       irq->vcpu->cpu != -1) /* VCPU thread is running */
 		cond_resched_lock(&irq->irq_lock);
 
+	if (irq->hw) {
+		/*
+		 * We cannot support setting the physical active state for
+		 * private interrupts from another CPU than the one running
+		 * the VCPU which identifies which private interrupt it is
+		 * trying to modify.
+		 */
+		if (irq->intid < VGIC_NR_PRIVATE_IRQS &&
+		    irq->target_vcpu != requester_vcpu) {
+			spin_unlock(&irq->irq_lock);
+			return;
+		}
+
+		vgic_irq_set_phys_active(irq, new_active_state);
+	}
+
 	irq->active = new_active_state;
 	if (new_active_state)
 		vgic_queue_irq_unlock(vcpu->kvm, irq);
diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 8072969..7aec730 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -140,6 +140,13 @@ void vgic_put_irq(struct kvm *kvm, struct vgic_irq *irq)
 	kfree(irq);
 }
 
+void vgic_irq_set_phys_pending(struct vgic_irq *irq, bool pending)
+{
+	WARN_ON(irq_set_irqchip_state(irq->host_irq,
+				      IRQCHIP_STATE_PENDING,
+				      pending));
+}
+
 /* Get the input level of a mapped IRQ directly from the physical GIC */
 bool vgic_get_phys_line_level(struct vgic_irq *irq)
 {
diff --git a/virt/kvm/arm/vgic/vgic.h b/virt/kvm/arm/vgic/vgic.h
index 7bdcda2..498ee05 100644
--- a/virt/kvm/arm/vgic/vgic.h
+++ b/virt/kvm/arm/vgic/vgic.h
@@ -146,6 +146,7 @@ struct vgic_irq *vgic_get_irq(struct kvm *kvm, struct kvm_vcpu *vcpu,
 			      u32 intid);
 void vgic_put_irq(struct kvm *kvm, struct vgic_irq *irq);
 bool vgic_get_phys_line_level(struct vgic_irq *irq);
+void vgic_irq_set_phys_pending(struct vgic_irq *irq, bool pending);
 void vgic_irq_set_phys_active(struct vgic_irq *irq, bool active);
 bool vgic_queue_irq_unlock(struct kvm *kvm, struct vgic_irq *irq);
 void vgic_kick_vcpus(struct kvm *kvm);