From patchwork Thu Dec 12 19:55:48 2013
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 3335041
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org
Cc: linaro-kernel@lists.linaro.org, Christoffer Dall, patches@linaro.org
Subject: [PATCH 09/10] KVM: arm-vgic: Add GICD_SPENDSGIR and GICD_CPENDSGIR handlers
Date: Thu, 12 Dec 2013 11:55:48 -0800
Message-Id: <1386878149-13397-10-git-send-email-christoffer.dall@linaro.org>
In-Reply-To: <1386878149-13397-1-git-send-email-christoffer.dall@linaro.org>
References: <1386878149-13397-1-git-send-email-christoffer.dall@linaro.org>

Handle MMIO accesses to these two registers, supporting both the case
where the VM reads or writes either register directly and the case where
user space reads and writes them to save and restore the VGIC state.

Note that the added complexity compared to simple set/clear enable
registers stems from the bookkeeping of source CPU IDs.  It may be
possible to change the underlying data structure to reduce this
complexity, but since this code is not in the critical path at all, the
current approach will do.

Also note that reading these registers from a live guest will not be
accurate compared to real hardware, because some state may still live in
the CPU list registers (LRs); the only way to give a fully consistent
read would be to force-stop all the VCPUs and request them to unqueue
the LR state onto the distributor.  Until we have an actual user that
needs to read these registers from a live guest, we can live with the
difference.

Acked-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---
Changelog[v3]:
 - Renamed read/write SGI set/clear functions
 - Rely on unqueuing of interrupts from LRs instead of reading LRs
   directly
 - Deduplicate code
Changelog[v2]:
 - Use struct kvm_exit_mmio accessors for ->data field.
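As a side note for reviewers: the bookkeeping described above boils down
to packing four per-SGI source bitmaps into each 32-bit register word
(one byte per SGI, one bit per source CPU, as the GICv2 architecture
defines for GICD_SPENDSGIRn/GICD_CPENDSGIRn).  The standalone sketch
below only illustrates that packing outside the kernel; the array name,
the sizes, and the pack_sgi_pend_reg() helper are invented for the
example and are not the kernel's data structures.

/*
 * Standalone illustration (not kernel code) of the GICD_SPENDSGIRn
 * layout: register n covers SGIs 4n..4n+3, one byte per SGI, and each
 * bit in a byte marks a source CPU with that SGI pending.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define NR_SGIS		16
#define NR_VCPUS	4

/* Assumed stand-in for the distributor's per-vcpu SGI source bookkeeping. */
static uint8_t irq_sgi_sources[NR_VCPUS][NR_SGIS];

/* Pack the four source bytes covered by GICD_SPENDSGIRn for one vcpu. */
static uint32_t pack_sgi_pend_reg(int vcpu_id, int n)
{
	uint32_t reg = 0;
	int min_sgi = n * 4;

	for (int sgi = min_sgi; sgi <= min_sgi + 3; sgi++)
		reg |= (uint32_t)irq_sgi_sources[vcpu_id][sgi]
			<< (8 * (sgi - min_sgi));

	return reg;
}

int main(void)
{
	/* SGI 1 pending on vcpu 0, sent by CPUs 2 and 3 (bits 2 and 3). */
	irq_sgi_sources[0][1] = (1U << 2) | (1U << 3);

	/* Prints 0x00000c00: the SGI 1 byte sits at bits [15:8] of reg 0. */
	printf("GICD_SPENDSGIR0 for vcpu0 = 0x%08" PRIx32 "\n",
	       pack_sgi_pend_reg(0, 0));
	return 0;
}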
 virt/kvm/arm/vgic.c | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 66 insertions(+), 4 deletions(-)

diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 8067e76..752f76c 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -661,18 +661,80 @@ static void vgic_unqueue_irqs(struct kvm_vcpu *vcpu)
 	}
 }
 
-static bool handle_mmio_sgi_clear(struct kvm_vcpu *vcpu,
-				  struct kvm_exit_mmio *mmio,
-				  phys_addr_t offset)
+/* Handle reads of GICD_CPENDSGIRn and GICD_SPENDSGIRn */
+static bool read_set_clear_sgi_pend_reg(struct kvm_vcpu *vcpu,
+					struct kvm_exit_mmio *mmio,
+					phys_addr_t offset)
 {
+	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+	int sgi;
+	int min_sgi = (offset & ~0x3) * 4;
+	int max_sgi = min_sgi + 3;
+	int vcpu_id = vcpu->vcpu_id;
+	u32 reg = 0;
+
+	/* Copy source SGIs from distributor side */
+	for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
+		int shift = 8 * (sgi - min_sgi);
+		reg |= (u32)dist->irq_sgi_sources[vcpu_id][sgi] << shift;
+	}
+
+	mmio_data_write(mmio, ~0, reg);
 	return false;
 }
 
+static bool write_set_clear_sgi_pend_reg(struct kvm_vcpu *vcpu,
+					 struct kvm_exit_mmio *mmio,
+					 phys_addr_t offset, bool set)
+{
+	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+	int sgi;
+	int min_sgi = (offset & ~0x3) * 4;
+	int max_sgi = min_sgi + 3;
+	int vcpu_id = vcpu->vcpu_id;
+	u32 reg;
+	bool updated = false;
+
+	reg = mmio_data_read(mmio, ~0);
+
+	/* Clear pending SGIs on the distributor */
+	for (sgi = min_sgi; sgi <= max_sgi; sgi++) {
+		u8 mask = reg >> (8 * (sgi - min_sgi));
+		if (set) {
+			if ((dist->irq_sgi_sources[vcpu_id][sgi] & mask) != mask)
+				updated = true;
+			dist->irq_sgi_sources[vcpu_id][sgi] |= mask;
+		} else {
+			if (dist->irq_sgi_sources[vcpu_id][sgi] & mask)
+				updated = true;
+			dist->irq_sgi_sources[vcpu_id][sgi] &= ~mask;
+		}
+	}
+
+	if (updated)
+		vgic_update_state(vcpu->kvm);
+
+	return updated;
+}
+
 static bool handle_mmio_sgi_set(struct kvm_vcpu *vcpu,
 				struct kvm_exit_mmio *mmio,
 				phys_addr_t offset)
 {
-	return false;
+	if (!mmio->is_write)
+		return read_set_clear_sgi_pend_reg(vcpu, mmio, offset);
+	else
+		return write_set_clear_sgi_pend_reg(vcpu, mmio, offset, true);
+}
+
+static bool handle_mmio_sgi_clear(struct kvm_vcpu *vcpu,
+				  struct kvm_exit_mmio *mmio,
+				  phys_addr_t offset)
+{
+	if (!mmio->is_write)
+		return read_set_clear_sgi_pend_reg(vcpu, mmio, offset);
+	else
+		return write_set_clear_sgi_pend_reg(vcpu, mmio, offset, false);
 }
 
 /*
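For completeness, here is a rough user-space sketch of the save path
these handlers serve: reading one GICD_SPENDSGIRn word for a given vcpu
through the VGIC device-attribute interface from this series.  The
constants and encoding below (group 1 for the distributor register
group, the cpu id in attr bits [39:32], the MMIO offset in bits [31:0],
GICD_SPENDSGIR at distributor offset 0xf20) are assumptions based on the
GICv2 register map and my reading of the uapi headers, not something
introduced by this particular patch; vgic_fd is assumed to be a device
fd obtained earlier with KVM_CREATE_DEVICE.

/*
 * Rough sketch: fetch GICD_SPENDSGIRn for one vcpu via
 * KVM_GET_DEVICE_ATTR on the VGIC device fd.
 */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define VGIC_GRP_DIST_REGS	1	/* assumed: KVM_DEV_ARM_VGIC_GRP_DIST_REGS */
#define GICD_SPENDSGIR		0xf20UL	/* GICv2 distributor offset */

static int read_spendsgir(int vgic_fd, uint32_t cpuid, int n, uint32_t *val)
{
	struct kvm_device_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.group = VGIC_GRP_DIST_REGS;
	/* cpu id in the upper attr bits, register offset in the lower bits */
	attr.attr  = ((uint64_t)cpuid << 32) | (GICD_SPENDSGIR + 4 * n);
	attr.addr  = (uint64_t)(unsigned long)val;

	return ioctl(vgic_fd, KVM_GET_DEVICE_ATTR, &attr);
}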