From patchwork Thu Dec 7 17:06:26 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10100483
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Marc Zyngier, Andrew Jones, Christoffer Dall, Shih-Wei Li, kvm@vger.kernel.org
Subject: [PATCH v2 32/36] KVM: arm/arm64: Handle VGICv2 save/restore from the main VGIC code
Date: Thu, 7 Dec 2017 18:06:26 +0100
Message-Id: <20171207170630.592-33-christoffer.dall@linaro.org>
X-Mailer: git-send-email 2.14.2
In-Reply-To: <20171207170630.592-1-christoffer.dall@linaro.org>
References: <20171207170630.592-1-christoffer.dall@linaro.org>

We can program the GICv2 hypervisor control interface logic directly
from the core vgic code, so we can do the save/restore directly from
the flush/sync functions instead of from the hyp world-switch code,
which can lead to a number of future optimizations.

Signed-off-by: Christoffer Dall
---

Notes:
    Changes since v1:
     - Removed unnecessary kvm_hyp.h include
     - Adapted the patch based on having gotten rid of storing the elrsr
       prior to this patch.
     - No longer change the interrupt handling of the maintenance interrupt
       handler. That seems to have been a leftover from an earlier version
       of the timer patches where we were syncing the vgic state after
       having enabled interrupts, leading to the maintenance interrupt
       firing. It may be possible to move the vgic sync function out to an
       interrupts-enabled section later on, which would require
       re-introducing logic to disable the VGIC maintenance interrupt in
       the maintenance interrupt handler, but we leave this for future
       work as the immediate benefit is not clear.
 arch/arm/kvm/hyp/switch.c        |  4 ---
 arch/arm64/include/asm/kvm_hyp.h |  2 --
 arch/arm64/kvm/hyp/switch.c      |  4 ---
 virt/kvm/arm/hyp/vgic-v2-sr.c    | 65 ----------------------------------------
 virt/kvm/arm/vgic/vgic-v2.c      | 63 ++++++++++++++++++++++++++++++++++++++
 virt/kvm/arm/vgic/vgic.c         | 19 +++++++++++-
 virt/kvm/arm/vgic/vgic.h         |  3 ++
 7 files changed, 84 insertions(+), 76 deletions(-)

diff --git a/arch/arm/kvm/hyp/switch.c b/arch/arm/kvm/hyp/switch.c
index 7b2bd25e3b10..214187446e63 100644
--- a/arch/arm/kvm/hyp/switch.c
+++ b/arch/arm/kvm/hyp/switch.c
@@ -91,16 +91,12 @@ static void __hyp_text __vgic_save_state(struct kvm_vcpu *vcpu)
 {
 	if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
 		__vgic_v3_save_state(vcpu);
-	else
-		__vgic_v2_save_state(vcpu);
 }
 
 static void __hyp_text __vgic_restore_state(struct kvm_vcpu *vcpu)
 {
 	if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
 		__vgic_v3_restore_state(vcpu);
-	else
-		__vgic_v2_restore_state(vcpu);
 }
 
 static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 28d5f3cb4001..bd3fe6446728 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -121,8 +121,6 @@ typeof(orig) * __hyp_text fname(void)					\
 	return val;							\
 }
 
-void __vgic_v2_save_state(struct kvm_vcpu *vcpu);
-void __vgic_v2_restore_state(struct kvm_vcpu *vcpu);
 int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu);
 
 void __vgic_v3_save_state(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index f9f104bfc27b..a7de1436a0e6 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -187,16 +187,12 @@ static void __hyp_text __vgic_save_state(struct kvm_vcpu *vcpu)
 {
 	if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
 		__vgic_v3_save_state(vcpu);
-	else
-		__vgic_v2_save_state(vcpu);
 }
 
 static void __hyp_text __vgic_restore_state(struct kvm_vcpu *vcpu)
 {
 	if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
 		__vgic_v3_restore_state(vcpu);
-	else
-		__vgic_v2_restore_state(vcpu);
 }
 
 static bool __hyp_text __true_value(void)
diff --git a/virt/kvm/arm/hyp/vgic-v2-sr.c b/virt/kvm/arm/hyp/vgic-v2-sr.c
index c536e3d87942..b433257f4348 100644
--- a/virt/kvm/arm/hyp/vgic-v2-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v2-sr.c
@@ -22,71 +22,6 @@
 #include
 #include
 
-static void __hyp_text save_lrs(struct kvm_vcpu *vcpu, void __iomem *base)
-{
-	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
-	u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
-	u64 elrsr;
-	int i;
-
-	elrsr = readl_relaxed(base + GICH_ELRSR0);
-	if (unlikely(used_lrs > 32))
-		elrsr |= ((u64)readl_relaxed(base + GICH_ELRSR1)) << 32;
-
-	for (i = 0; i < used_lrs; i++) {
-		if (elrsr & (1UL << i))
-			cpu_if->vgic_lr[i] &= ~GICH_LR_STATE;
-		else
-			cpu_if->vgic_lr[i] = readl_relaxed(base + GICH_LR0 + (i * 4));
-
-		writel_relaxed(0, base + GICH_LR0 + (i * 4));
-	}
-}
-
-/* vcpu is already in the HYP VA space */
-void __hyp_text __vgic_v2_save_state(struct kvm_vcpu *vcpu)
-{
-	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
-	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
-	struct vgic_dist *vgic = &kvm->arch.vgic;
-	void __iomem *base = kern_hyp_va(vgic->vctrl_base);
-	u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
-
-	if (!base)
-		return;
-
-	if (used_lrs) {
-		cpu_if->vgic_apr = readl_relaxed(base + GICH_APR);
-		save_lrs(vcpu, base);
-		writel_relaxed(0, base + GICH_HCR);
-	} else {
-		cpu_if->vgic_apr = 0;
-	}
-}
-
-/* vcpu is already in the HYP VA space */
-void __hyp_text __vgic_v2_restore_state(struct kvm_vcpu *vcpu)
-{
-	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
-	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
-	struct vgic_dist *vgic = &kvm->arch.vgic;
-	void __iomem *base = kern_hyp_va(vgic->vctrl_base);
-	int i;
-	u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
-
-	if (!base)
-		return;
-
-	if (used_lrs) {
-		writel_relaxed(cpu_if->vgic_hcr, base + GICH_HCR);
-		writel_relaxed(cpu_if->vgic_apr, base + GICH_APR);
-		for (i = 0; i < used_lrs; i++) {
-			writel_relaxed(cpu_if->vgic_lr[i],
-				       base + GICH_LR0 + (i * 4));
-		}
-	}
-}
-
 #ifdef CONFIG_ARM64
 /*
  * __vgic_v2_perform_cpuif_access -- perform a GICV access on behalf of the
diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
index bb305d49cfdd..1e5f3eb6973d 100644
--- a/virt/kvm/arm/vgic/vgic-v2.c
+++ b/virt/kvm/arm/vgic/vgic-v2.c
@@ -421,6 +421,69 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
 	return ret;
 }
 
+static void save_lrs(struct kvm_vcpu *vcpu, void __iomem *base)
+{
+	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
+	u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
+	u64 elrsr;
+	int i;
+
+	elrsr = readl_relaxed(base + GICH_ELRSR0);
+	if (unlikely(used_lrs > 32))
+		elrsr |= ((u64)readl_relaxed(base + GICH_ELRSR1)) << 32;
+
+	for (i = 0; i < used_lrs; i++) {
+		if (elrsr & (1UL << i))
+			cpu_if->vgic_lr[i] &= ~GICH_LR_STATE;
+		else
+			cpu_if->vgic_lr[i] = readl_relaxed(base + GICH_LR0 + (i * 4));
+
+		writel_relaxed(0, base + GICH_LR0 + (i * 4));
+	}
+}
+
+void vgic_v2_save_state(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct vgic_dist *vgic = &kvm->arch.vgic;
+	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
+	void __iomem *base = vgic->vctrl_base;
+	u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
+
+	if (!base)
+		return;
+
+	if (used_lrs) {
+		cpu_if->vgic_apr = readl_relaxed(base + GICH_APR);
+		save_lrs(vcpu, base);
+		writel_relaxed(0, base + GICH_HCR);
+	} else {
+		cpu_if->vgic_apr = 0;
+	}
+}
+
+void vgic_v2_restore_state(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct vgic_dist *vgic = &kvm->arch.vgic;
+	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
+	void __iomem *base = vgic->vctrl_base;
+	u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
+	int i;
+
+	if (!base)
+		return;
+
+	if (used_lrs) {
+		writel_relaxed(cpu_if->vgic_hcr, base + GICH_HCR);
+		writel_relaxed(cpu_if->vgic_apr, base + GICH_APR);
+		for (i = 0; i < used_lrs; i++) {
+			writel_relaxed(cpu_if->vgic_lr[i],
+				       base + GICH_LR0 + (i * 4));
+		}
+	}
+}
+
 void vgic_v2_load(struct kvm_vcpu *vcpu)
 {
 	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index f6299eb1998f..5bf0804e79b4 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -750,11 +750,19 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
 		vgic_clear_lr(vcpu, count);
 }
 
+static inline void vgic_save_state(struct kvm_vcpu *vcpu)
+{
+	if (!static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
+		vgic_v2_save_state(vcpu);
+}
+
 /* Sync back the hardware VGIC state into our emulation after a guest's run. */
 void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
 {
 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
 
+	vgic_save_state(vcpu);
+
 	WARN_ON(vgic_v4_sync_hwstate(vcpu));
 
 	/* An empty ap_list_head implies used_lrs == 0 */
@@ -766,6 +774,12 @@ void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
 	vgic_prune_ap_list(vcpu);
 }
 
+static inline void vgic_restore_state(struct kvm_vcpu *vcpu)
+{
+	if (!static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
+		vgic_v2_restore_state(vcpu);
+}
+
 /* Flush our emulation state into the GIC hardware before entering the guest. */
 void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
 {
@@ -781,13 +795,16 @@ void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
 	 * this.
 	 */
 	if (list_empty(&vcpu->arch.vgic_cpu.ap_list_head))
-		return;
+		goto out;
 
 	DEBUG_SPINLOCK_BUG_ON(!irqs_disabled());
 
 	spin_lock(&vcpu->arch.vgic_cpu.ap_list_lock);
 	vgic_flush_lr_state(vcpu);
 	spin_unlock(&vcpu->arch.vgic_cpu.ap_list_lock);
+
+out:
+	vgic_restore_state(vcpu);
 }
 
 void kvm_vgic_load(struct kvm_vcpu *vcpu)
diff --git a/virt/kvm/arm/vgic/vgic.h b/virt/kvm/arm/vgic/vgic.h
index 12c37b89f7a3..89b9547fba27 100644
--- a/virt/kvm/arm/vgic/vgic.h
+++ b/virt/kvm/arm/vgic/vgic.h
@@ -176,6 +176,9 @@ void vgic_v2_init_lrs(void);
 void vgic_v2_load(struct kvm_vcpu *vcpu);
 void vgic_v2_put(struct kvm_vcpu *vcpu);
 
+void vgic_v2_save_state(struct kvm_vcpu *vcpu);
+void vgic_v2_restore_state(struct kvm_vcpu *vcpu);
+
 static inline void vgic_get_irq_kref(struct vgic_irq *irq)
 {
 	if (irq->intid < VGIC_MIN_LPI)
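[Editor's note: the following is a condensed sketch of the flow this patch
produces, assembled from the hunks above for readers who do not want to walk
the whole diff; it is not part of the patch. The hyp world-switch code keeps
only the GICv3 CPU-interface handling, while the common VGIC flush/sync paths
drive the new kernel-VA GICv2 helpers around guest entry and exit. Bodies
other than the dispatch helpers are abbreviated.]

/* Post-patch flow in virt/kvm/arm/vgic/vgic.c (abbreviated sketch). */

static inline void vgic_save_state(struct kvm_vcpu *vcpu)
{
	/* GICv2 state is now saved via kernel VAs, outside the hyp code. */
	if (!static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
		vgic_v2_save_state(vcpu);
}

static inline void vgic_restore_state(struct kvm_vcpu *vcpu)
{
	if (!static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
		vgic_v2_restore_state(vcpu);
}

void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
{
	/* ... LR population from the ap_list elided (vgic_flush_lr_state) ... */
	vgic_restore_state(vcpu);	/* program GICH_* just before guest entry */
}

void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
{
	vgic_save_state(vcpu);		/* read GICH_* back after the guest has run */
	/* ... vgic_v4_sync_hwstate(), LR folding and ap_list pruning elided ... */
}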