From patchwork Mon Jan 9 06:24:45 2017
X-Patchwork-Submitter: Jintack Lim
X-Patchwork-Id: 9504107
From: Jintack Lim
To: christoffer.dall@linaro.org, marc.zyngier@arm.com, pbonzini@redhat.com,
	rkrcmar@redhat.com, linux@armlinux.org.uk, catalin.marinas@arm.com,
	will.deacon@arm.com, vladimir.murzin@arm.com, suzuki.poulose@arm.com,
	mark.rutland@arm.com, james.morse@arm.com, lorenzo.pieralisi@arm.com,
	kevin.brodsky@arm.com, wcohen@redhat.com, shankerd@codeaurora.org,
	geoff@infradead.org, andre.przywara@arm.com, eric.auger@redhat.com,
	anna-maria@linutronix.de, shihwei@cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: jintack@cs.columbia.edu
Subject: [RFC 49/55] KVM: arm64: Fixes to toggle_cache for nesting
Date: Mon, 9 Jan 2017 01:24:45 -0500
Message-Id: <1483943091-1364-50-git-send-email-jintack@cs.columbia.edu>
In-Reply-To: <1483943091-1364-1-git-send-email-jintack@cs.columbia.edu>
References: <1483943091-1364-1-git-send-email-jintack@cs.columbia.edu>

From: Christoffer Dall

So far we were flushing almost the entire universe whenever a VM would
load/unload SCTLR_EL1 and the two versions of that register had
different MMU-enabled settings. This turned out to be so slow that it
prevented forward progress for a nested VM, because a scheduler timer
tick interrupt would always be pending by the time we reached the
nested VM.

To avoid this problem, we consider SCTLR_EL2 when evaluating whether
caches are on or off on entry to virtual EL2 (because this is the value
that we end up shadowing onto the hardware EL1 register). We also
reduce the scope of the flush operation to flush only the shadow
stage 2 page table state of the particular VCPU toggling the caches,
instead of the shadow stage 2 state of all possible VCPUs.
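A note on the 0b101 mask used throughout the patch: in the ARMv8 SCTLR
layout, bit 0 is M (stage 1 MMU enable) and bit 2 is C (data/unified
cache enable), so the test is true only when both the MMU and the data
caches are on. A minimal standalone sketch of that predicate (the names
SCTLR_M, SCTLR_C and caches_enabled are illustrative, not the kernel's):

/*
 * Standalone sketch, not kernel code: the 0b101 mask covers exactly
 * SCTLR.M (bit 0) and SCTLR.C (bit 2), so caches only count as
 * enabled when both bits are set.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SCTLR_M	(UINT64_C(1) << 0)	/* stage 1 MMU enable */
#define SCTLR_C	(UINT64_C(1) << 2)	/* data/unified cache enable */

static bool caches_enabled(uint64_t sctlr)
{
	return (sctlr & (SCTLR_M | SCTLR_C)) == (SCTLR_M | SCTLR_C);
}

int main(void)
{
	printf("%d\n", caches_enabled(SCTLR_M));		/* 0: MMU on, caches off */
	printf("%d\n", caches_enabled(SCTLR_M | SCTLR_C));	/* 1: both on */
	return 0;
}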
Signed-off-by: Christoffer Dall
Signed-off-by: Jintack Lim
---
 arch/arm/kvm/mmu.c               | 31 ++++++++++++++++++++++++++++++-
 arch/arm64/include/asm/kvm_mmu.h |  7 ++++++-
 2 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 68fc8e8..344bc01 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -422,6 +422,35 @@ static void stage2_flush_vm(struct kvm *kvm)
 	srcu_read_unlock(&kvm->srcu, idx);
 }
 
+/**
+ * Same as above, but only flushes the shadow state for the given vcpu
+ */
+static void stage2_flush_vcpu(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	int idx;
+	struct kvm_nested_s2_mmu __maybe_unused *nested_mmu;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	spin_lock(&kvm->mmu_lock);
+
+	slots = kvm_memslots(kvm);
+	kvm_for_each_memslot(memslot, slots)
+		stage2_flush_memslot(&kvm->arch.mmu, memslot);
+
+#ifdef CONFIG_KVM_ARM_NESTED_HYP
+	list_for_each_entry_rcu(nested_mmu, &vcpu->kvm->arch.nested_mmu_list,
+				list) {
+		kvm_stage2_flush_range(&nested_mmu->mmu, 0, KVM_PHYS_SIZE);
+	}
+#endif
+
+	spin_unlock(&kvm->mmu_lock);
+	srcu_read_unlock(&kvm->srcu, idx);
+}
+
 static void clear_hyp_pgd_entry(pgd_t *pgd)
 {
 	pud_t *pud_table __maybe_unused = pud_offset(pgd, 0UL);
@@ -2074,7 +2103,7 @@ void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled)
 	 * Clean + invalidate does the trick always.
 	 */
 	if (now_enabled != was_enabled)
-		stage2_flush_vm(vcpu->kvm);
+		stage2_flush_vcpu(vcpu);
 
 	/* Caches are now on, stop trapping VM ops (until a S/W op) */
 	if (now_enabled)
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 2086296..7754f3e 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -241,7 +241,12 @@ static inline bool kvm_page_empty(void *ptr)
 
 static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 {
-	return (vcpu_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
+	u32 mode = vcpu->arch.ctxt.gp_regs.regs.pstate & PSR_MODE_MASK;
+
+	if (mode != PSR_MODE_EL2h && mode != PSR_MODE_EL2t)
+		return (vcpu_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
+	else
+		return (vcpu_el2_reg(vcpu, SCTLR_EL2) & 0b101) == 0b101;
 }
 
 static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
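For context on the mode test added to vcpu_has_cache_enabled() above:
PSTATE.M distinguishes EL2t (SP_EL0 selected) from EL2h (SP_EL2
selected), so both encodings must be checked to detect "the vCPU is
running in virtual EL2". A standalone sketch of that check (the
constants match arch/arm64/include/uapi/asm/ptrace.h; in_virtual_el2()
is an illustrative helper, not a kernel function):

/*
 * Standalone sketch, not kernel code: virtual EL2 covers both the
 * EL2t and EL2h encodings of PSTATE.M, which differ only in the
 * stack pointer selection.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PSR_MODE_MASK	UINT64_C(0x0000000f)
#define PSR_MODE_EL1h	UINT64_C(0x00000005)
#define PSR_MODE_EL2t	UINT64_C(0x00000008)
#define PSR_MODE_EL2h	UINT64_C(0x00000009)

static bool in_virtual_el2(uint64_t pstate)
{
	uint64_t mode = pstate & PSR_MODE_MASK;

	return mode == PSR_MODE_EL2t || mode == PSR_MODE_EL2h;
}

int main(void)
{
	printf("%d\n", in_virtual_el2(PSR_MODE_EL2h));	/* 1: consult SCTLR_EL2 */
	printf("%d\n", in_virtual_el2(PSR_MODE_EL1h));	/* 0: consult SCTLR_EL1 */
	return 0;
}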