From patchwork Tue Jan 31 17:16:25 2017
X-Patchwork-Submitter: Punit Agrawal
X-Patchwork-Id: 9547823
From: Punit Agrawal
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Mark Rutland, kvm@vger.kernel.org, Marc Zyngier, Punit Agrawal,
 Will Deacon, linux-kernel@vger.kernel.org, Christoffer Dall
Subject: [PATCH v4 4/9] arm64: KVM: Handle trappable TLB instructions
Date: Tue, 31 Jan 2017 17:16:25 +0000
Message-Id: <20170131171630.26898-5-punit.agrawal@arm.com>
In-Reply-To: <20170131171630.26898-1-punit.agrawal@arm.com>
References: <20170131171630.26898-1-punit.agrawal@arm.com>

The ARMv8 architecture allows TLB maintenance instructions executed at
EL0/EL1 to be trapped to higher exception levels. On encountering a
trappable TLB instruction in a guest, an exception is taken to EL2. Add
support for emulating the trapped TLB instructions, and track the number
of emulated operations.
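For reference, a trapped system instruction is identified by its
(Op0, Op1, CRn, CRm, Op2) encoding, which the handler below packs into a
single opcode using the kernel's sys_reg() macro. A minimal standalone
sketch of that packing (the shift values mirror
arch/arm64/include/asm/sysreg.h; this program is illustrative only and
not part of the patch):

#include <stdint.h>
#include <stdio.h>

/* Field positions mirror arch/arm64/include/asm/sysreg.h */
#define Op0_shift 19
#define Op1_shift 16
#define CRn_shift 12
#define CRm_shift  8
#define Op2_shift  5

static uint32_t sys_reg(uint32_t op0, uint32_t op1, uint32_t crn,
			uint32_t crm, uint32_t op2)
{
	return (op0 << Op0_shift) | (op1 << Op1_shift) |
	       (crn << CRn_shift) | (crm << CRm_shift) |
	       (op2 << Op2_shift);
}

int main(void)
{
	/* TLBI VAE1IS is encoded as Op0=1, Op1=0, CRn=8, CRm=3, Op2=1 */
	printf("TLBI VAE1IS opcode: %#x\n", sys_reg(1, 0, 8, 3, 1));
	return 0;
}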
Signed-off-by: Punit Agrawal
Cc: Christoffer Dall
Cc: Marc Zyngier
---
 arch/arm64/include/asm/kvm_asm.h  |  2 +
 arch/arm64/include/asm/kvm_host.h |  1 +
 arch/arm64/kvm/hyp/tlb.c          | 75 +++++++++++++++++++++++++++++++++++
 arch/arm64/kvm/sys_regs.c         | 83 +++++++++++++++++++++++++++++++++++++++
 4 files changed, 161 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index ec3553eb9349..8b695785d454 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -55,6 +55,8 @@ extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
 extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
+extern void __kvm_emulate_tlb_invalidate(struct kvm *kvm, u32 opcode,
+					 u64 regval);
 
 extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e5050388e062..1e83b707f14c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -307,6 +307,7 @@ struct kvm_vcpu_stat {
 	u64 mmio_exit_user;
 	u64 mmio_exit_kernel;
 	u64 exits;
+	u64 tlb_invalidate;
 };
 
 int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init);
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
index b2cfbedea582..c14237858d75 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/tlb.c
@@ -86,3 +86,78 @@ void __hyp_text __kvm_flush_vm_context(void)
 	__tlbi(alle1is);
 	__flush_icache_all(); /* contains a dsb(ish) */
 }
+
+/* Intentionally empty functions */
+static void __hyp_text __switch_to_hyp_role_nvhe(void) { }
+static void __hyp_text __switch_to_host_role_nvhe(void) { }
+
+static void __hyp_text __switch_to_hyp_role_vhe(void)
+{
+	u64 hcr = read_sysreg(hcr_el2);
+
+	/*
+	 * When VHE is enabled and HCR_EL2.TGE=1, EL1&0 TLB operations
+	 * apply to EL2&0 translation regime. As we prepare to emulate
+	 * guest TLB operation clear HCR_TGE to target TLB operations
+	 * to EL1&0 (guest).
+	 */
+	hcr &= ~HCR_TGE;
+	write_sysreg(hcr, hcr_el2);
+}
+
+static void __hyp_text __switch_to_host_role_vhe(void)
+{
+	u64 hcr = read_sysreg(hcr_el2);
+
+	hcr |= HCR_TGE;
+	write_sysreg(hcr, hcr_el2);
+}
+
+static hyp_alternate_select(__switch_to_hyp_role,
+			    __switch_to_hyp_role_nvhe,
+			    __switch_to_hyp_role_vhe,
+			    ARM64_HAS_VIRT_HOST_EXTN);
+
+static hyp_alternate_select(__switch_to_host_role,
+			    __switch_to_host_role_nvhe,
+			    __switch_to_host_role_vhe,
+			    ARM64_HAS_VIRT_HOST_EXTN);
+
+static void __hyp_text __switch_to_guest_regime(struct kvm *kvm)
+{
+	write_sysreg(kvm->arch.vttbr, vttbr_el2);
+	__switch_to_hyp_role();
+	isb();
+}
+
+static void __hyp_text __switch_to_host_regime(void)
+{
+	__switch_to_host_role();
+	write_sysreg(0, vttbr_el2);
+}
+
+void __hyp_text
+__kvm_emulate_tlb_invalidate(struct kvm *kvm, u32 opcode, u64 regval)
+{
+	kvm = kern_hyp_va(kvm);
+
+	/*
+	 * Switch to the guest before performing any TLB operations to
+	 * target the appropriate VMID
+	 */
+	__switch_to_guest_regime(kvm);
+
+	/*
+	 * TLB maintenance operations are broadcast to
+	 * inner-shareable domain when HCR_FB is set (default for
+	 * KVM).
+	 *
+	 * Nuke all Stage 1 TLB entries for the VM. This will kill
+	 * performance but it's always safe to do as we don't leave
+	 * behind any strays in the TLB
+	 */
+	__tlbi(vmalle1is);
+	isb();
+
+	__switch_to_host_regime();
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 87e7e6608cd8..12ff8cf9f18a 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -791,6 +791,20 @@ static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	return true;
 }
 
+static bool emulate_tlb_invalidate(struct kvm_vcpu *vcpu,
+				   struct sys_reg_params *p,
+				   const struct sys_reg_desc *r)
+{
+	u32 opcode = sys_reg(p->Op0, p->Op1, p->CRn, p->CRm, p->Op2);
+
+	kvm_call_hyp(__kvm_emulate_tlb_invalidate,
+		     vcpu->kvm, opcode, p->regval);
+	trace_kvm_tlb_invalidate(*vcpu_pc(vcpu), opcode);
+	++vcpu->stat.tlb_invalidate;
+
+	return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	/* DBGBVRn_EL1 */						\
@@ -842,6 +856,35 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ Op0(0b01), Op1(0b000), CRn(0b0111), CRm(0b1110), Op2(0b010),
 	  access_dcsw },
 
+	/*
+	 * ARMv8 ARM: Table C5-4 TLB maintenance instructions
+	 * (Ref: ARMv8 ARM C5.1 version: ARM DDI 0487A.j)
+	 */
+	/* TLBI VMALLE1IS */
+	{ Op0(1), Op1(0), CRn(8), CRm(3), Op2(0), emulate_tlb_invalidate },
+	/* TLBI VAE1IS */
+	{ Op0(1), Op1(0), CRn(8), CRm(3), Op2(1), emulate_tlb_invalidate },
+	/* TLBI ASIDE1IS */
+	{ Op0(1), Op1(0), CRn(8), CRm(3), Op2(2), emulate_tlb_invalidate },
+	/* TLBI VAAE1IS */
+	{ Op0(1), Op1(0), CRn(8), CRm(3), Op2(3), emulate_tlb_invalidate },
+	/* TLBI VALE1IS */
+	{ Op0(1), Op1(0), CRn(8), CRm(3), Op2(5), emulate_tlb_invalidate },
+	/* TLBI VAALE1IS */
+	{ Op0(1), Op1(0), CRn(8), CRm(3), Op2(7), emulate_tlb_invalidate },
+	/* TLBI VMALLE1 */
+	{ Op0(1), Op1(0), CRn(8), CRm(7), Op2(0), emulate_tlb_invalidate },
+	/* TLBI VAE1 */
+	{ Op0(1), Op1(0), CRn(8), CRm(7), Op2(1), emulate_tlb_invalidate },
+	/* TLBI ASIDE1 */
+	{ Op0(1), Op1(0), CRn(8), CRm(7), Op2(2), emulate_tlb_invalidate },
+	/* TLBI VAAE1 */
+	{ Op0(1), Op1(0), CRn(8), CRm(7), Op2(3), emulate_tlb_invalidate },
+	/* TLBI VALE1 */
+	{ Op0(1), Op1(0), CRn(8), CRm(7), Op2(5), emulate_tlb_invalidate },
+	/* TLBI VAALE1 */
+	{ Op0(1), Op1(0), CRn(8), CRm(7), Op2(7), emulate_tlb_invalidate },
+
 	DBG_BCR_BVR_WCR_WVR_EL1(0),
 	DBG_BCR_BVR_WCR_WVR_EL1(1),
 	/* MDCCINT_EL1 */
@@ -1330,6 +1373,46 @@ static const struct sys_reg_desc cp15_regs[] = {
 	{ Op1( 0), CRn( 7), CRm(10), Op2( 2), access_dcsw },
 	{ Op1( 0), CRn( 7), CRm(14), Op2( 2), access_dcsw },
 
+	/*
+	 * TLB operations
+	 */
+	/* TLBIALLIS */
+	{ Op1( 0), CRn( 8), CRm( 3), Op2( 0), emulate_tlb_invalidate},
+	/* TLBIMVAIS */
+	{ Op1( 0), CRn( 8), CRm( 3), Op2( 1), emulate_tlb_invalidate},
+	/* TLBIASIDIS */
+	{ Op1( 0), CRn( 8), CRm( 3), Op2( 2), emulate_tlb_invalidate},
+	/* TLBIMVAAIS */
+	{ Op1( 0), CRn( 8), CRm( 3), Op2( 3), emulate_tlb_invalidate},
+	/* TLBIMVALIS */
+	{ Op1( 0), CRn( 8), CRm( 3), Op2( 5), emulate_tlb_invalidate},
+	/* TLBIMVAALIS */
+	{ Op1( 0), CRn( 8), CRm( 3), Op2( 7), emulate_tlb_invalidate},
+	/* ITLBIALL */
+	{ Op1( 0), CRn( 8), CRm( 5), Op2( 0), emulate_tlb_invalidate},
+	/* ITLBIMVA */
+	{ Op1( 0), CRn( 8), CRm( 5), Op2( 1), emulate_tlb_invalidate},
+	/* ITLBIASID */
+	{ Op1( 0), CRn( 8), CRm( 5), Op2( 2), emulate_tlb_invalidate},
+	/* DTLBIALL */
+	{ Op1( 0), CRn( 8), CRm( 6), Op2( 0), emulate_tlb_invalidate},
+	/* DTLBIMVA */
+	{ Op1( 0), CRn( 8), CRm( 6), Op2( 1), emulate_tlb_invalidate},
+	/* DTLBIASID */
+	{ Op1( 0), CRn( 8), CRm( 6), Op2( 2), emulate_tlb_invalidate},
+	/* TLBIALL */
+	{ Op1( 0), CRn( 8), CRm( 7), Op2( 0), emulate_tlb_invalidate},
+	/* TLBIMVA */
+	{ Op1( 0), CRn( 8), CRm( 7), Op2( 1), emulate_tlb_invalidate},
+	/* TLBIASID */
+	{ Op1( 0), CRn( 8), CRm( 7), Op2( 2), emulate_tlb_invalidate},
+	/* TLBIMVAA */
+	{ Op1( 0), CRn( 8), CRm( 7), Op2( 3), emulate_tlb_invalidate},
+	/* TLBIMVAL */
+	{ Op1( 0), CRn( 8), CRm( 7), Op2( 5), emulate_tlb_invalidate},
+	/* TLBIMVAAL */
+	{ Op1( 0), CRn( 8), CRm( 7), Op2( 7), emulate_tlb_invalidate},
+
 	/* PMU */
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmcr },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), access_pmcnten },
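For context, this is the kind of guest (EL1) operation that lands in
emulate_tlb_invalidate(): a page-granular stage 1 invalidate by VA. A
hedged sketch, assuming an arm64 EL1 environment and that the hypervisor
has set HCR_EL2.TTLB so EL1 TLB maintenance traps to EL2 (the trap
enable itself is outside this patch; guest_flush_tlb_page is a
hypothetical helper, not kernel API):

/*
 * Guest-side sketch (illustrative, not part of this patch): invalidate
 * the stage 1 TLB entry for 'va'. With HCR_EL2.TTLB set by the host,
 * the TLBI below traps to EL2 and is funnelled through
 * emulate_tlb_invalidate() -> __kvm_emulate_tlb_invalidate().
 */
static inline void guest_flush_tlb_page(unsigned long va)
{
	/*
	 * TLBI VAE1IS takes VA[55:12] in bits [43:0] of the source
	 * register; the ASID field (bits [63:48]) is left as zero here
	 * for simplicity.
	 */
	unsigned long arg = va >> 12;

	asm volatile("dsb ishst\n\t"
		     "tlbi vae1is, %0\n\t"
		     "dsb ish\n\t"
		     "isb"
		     : : "r" (arg) : "memory");
}

Whatever the trapped operation, the emulation above conservatively
widens it to TLBI VMALLE1IS under the guest's VMID, trading precision
for simplicity as noted in the tlb.c comment.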