From patchwork Tue Oct  3 03:10:52 2017
X-Patchwork-Submitter: Jintack Lim
X-Patchwork-Id: 9981549
From: Jintack Lim
To: christoffer.dall@linaro.org, marc.zyngier@arm.com,
    kvmarm@lists.cs.columbia.edu
Cc: jintack@cs.columbia.edu, pbonzini@redhat.com, rkrcmar@redhat.com,
    catalin.marinas@arm.com, will.deacon@arm.com, linux@armlinux.org.uk,
    mark.rutland@arm.com, linux-arm-kernel@lists.infradead.org,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jintack Lim
Subject: [RFC PATCH v2 10/31] KVM: arm/arm64: Unmap/flush shadow stage 2 page tables
Date: Mon, 2 Oct 2017 22:10:52 -0500
Message-Id: <1507000273-3735-8-git-send-email-jintack.lim@linaro.org>
In-Reply-To: <1507000273-3735-1-git-send-email-jintack.lim@linaro.org>
References: <1507000273-3735-1-git-send-email-jintack.lim@linaro.org>
X-Mailing-List: kvm@vger.kernel.org
From: Christoffer Dall

Unmap/flush shadow stage 2 page tables for the nested VMs as well as the
stage 2 page table for the guest hypervisor.

Note: A bunch of the code in mmu.c relating to MMU notifiers is
currently dealt with in an extremely abrupt way, for example by clearing
out an entire shadow stage-2 table. This will be handled in a more
efficient way using the reverse mapping feature in a later version of
the patch series.

Signed-off-by: Christoffer Dall
Signed-off-by: Jintack Lim
---

Notes:
    v1-->v2:
    - Removed an unnecessary iteration for each vcpu in
      kvm_nested_s2_all_vcpus_*() functions and remove all_vcpus in the
      function names; a list of nested mmu is per VM, not per vcpu.
    - Renamed kvm_nested_s2_unmap() to kvm_nested_s2_clear()
    - Renamed kvm_nested_s2_teardown() to kvm_nested_s2_free()
    - Removed the unused kvm_nested_s2_init() function.

 arch/arm/include/asm/kvm_mmu.h   |  6 ++++++
 arch/arm64/include/asm/kvm_mmu.h |  5 +++++
 arch/arm64/kvm/mmu-nested.c      | 40 ++++++++++++++++++++++++++++++++++++++++
 virt/kvm/arm/arm.c               |  6 +++++-
 virt/kvm/arm/mmu.c               | 17 +++++++++++++++++
 5 files changed, 73 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 86fdc70..d3eafc5 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -221,6 +221,12 @@ static inline unsigned int kvm_get_vmid_bits(void)
 	return 8;
 }
 
+static inline void kvm_nested_s2_unmap(struct kvm_vcpu *vcpu) { }
+static inline void kvm_nested_s2_free(struct kvm *kvm) { }
+static inline void kvm_nested_s2_wp(struct kvm *kvm) { }
+static inline void kvm_nested_s2_clear(struct kvm *kvm) { }
+static inline void kvm_nested_s2_flush(struct kvm *kvm) { }
+
 static inline u64 kvm_get_vttbr(struct kvm_s2_vmid *vmid,
 				struct kvm_s2_mmu *mmu)
 {
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 452912f..7fc7a83 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -325,6 +325,11 @@ static inline unsigned int kvm_get_vmid_bits(void)
 struct kvm_nested_s2_mmu *get_nested_mmu(struct kvm_vcpu *vcpu, u64 vttbr);
 struct kvm_s2_mmu *vcpu_get_active_s2_mmu(struct kvm_vcpu *vcpu);
 void update_nested_s2_mmu(struct kvm_vcpu *vcpu);
+void kvm_nested_s2_unmap(struct kvm_vcpu *vcpu);
+void kvm_nested_s2_free(struct kvm *kvm);
+void kvm_nested_s2_wp(struct kvm *kvm);
+void kvm_nested_s2_clear(struct kvm *kvm);
+void kvm_nested_s2_flush(struct kvm *kvm);
 
 static inline u64 kvm_get_vttbr(struct kvm_s2_vmid *vmid,
 				struct kvm_s2_mmu *mmu)
diff --git a/arch/arm64/kvm/mmu-nested.c b/arch/arm64/kvm/mmu-nested.c
index c436daf..3ee20f2 100644
--- a/arch/arm64/kvm/mmu-nested.c
+++ b/arch/arm64/kvm/mmu-nested.c
@@ -1,6 +1,7 @@
 /*
  * Copyright (C) 2017 - Columbia University and Linaro Ltd.
  * Author: Jintack Lim
+ * Author: Christoffer Dall
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 as
@@ -21,6 +22,45 @@
 #include
 #include
 
+/* expects kvm->mmu_lock to be held */
+void kvm_nested_s2_wp(struct kvm *kvm)
+{
+	struct kvm_nested_s2_mmu *nested_mmu;
+	struct list_head *nested_mmu_list = &kvm->arch.nested_mmu_list;
+
+	list_for_each_entry_rcu(nested_mmu, nested_mmu_list, list)
+		kvm_stage2_wp_range(kvm, &nested_mmu->mmu, 0, KVM_PHYS_SIZE);
+}
+
+/* expects kvm->mmu_lock to be held */
+void kvm_nested_s2_clear(struct kvm *kvm)
+{
+	struct kvm_nested_s2_mmu *nested_mmu;
+	struct list_head *nested_mmu_list = &kvm->arch.nested_mmu_list;
+
+	list_for_each_entry_rcu(nested_mmu, nested_mmu_list, list)
+		kvm_unmap_stage2_range(kvm, &nested_mmu->mmu, 0, KVM_PHYS_SIZE);
+}
+
+/* expects kvm->mmu_lock to be held */
+void kvm_nested_s2_flush(struct kvm *kvm)
+{
+	struct kvm_nested_s2_mmu *nested_mmu;
+	struct list_head *nested_mmu_list = &kvm->arch.nested_mmu_list;
+
+	list_for_each_entry_rcu(nested_mmu, nested_mmu_list, list)
+		kvm_stage2_flush_range(&nested_mmu->mmu, 0, KVM_PHYS_SIZE);
+}
+
+void kvm_nested_s2_free(struct kvm *kvm)
+{
+	struct kvm_nested_s2_mmu *nested_mmu;
+	struct list_head *nested_mmu_list = &kvm->arch.nested_mmu_list;
+
+	list_for_each_entry_rcu(nested_mmu, nested_mmu_list, list)
+		__kvm_free_stage2_pgd(kvm, &nested_mmu->mmu);
+}
+
 static struct kvm_nested_s2_mmu *lookup_nested_mmu(struct kvm_vcpu *vcpu,
 						   u64 vttbr)
 {
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 4548d77..08706f8 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -187,6 +187,8 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	free_percpu(kvm->arch.last_vcpu_ran);
 	kvm->arch.last_vcpu_ran = NULL;
 
+	kvm_nested_s2_free(kvm);
+
 	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
 		if (kvm->vcpus[i]) {
 			kvm_arch_vcpu_free(kvm->vcpus[i]);
@@ -926,8 +928,10 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	 * Ensure a rebooted VM will fault in RAM pages and detect if the
 	 * guest MMU is turned off and flush the caches as needed.
 	 */
-	if (vcpu->arch.has_run_once)
+	if (vcpu->arch.has_run_once) {
 		stage2_unmap_vm(vcpu->kvm);
+		kvm_nested_s2_clear(vcpu->kvm);
+	}
 
 	vcpu_reset_hcr(vcpu);
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index ca10799..3143f81 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -434,6 +434,8 @@ static void stage2_flush_vm(struct kvm *kvm)
 	kvm_for_each_memslot(memslot, slots)
 		stage2_flush_memslot(&kvm->arch.mmu, memslot);
 
+	kvm_nested_s2_flush(kvm);
+
 	spin_unlock(&kvm->mmu_lock);
 	srcu_read_unlock(&kvm->srcu, idx);
 }
@@ -1268,6 +1270,7 @@ void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
 	spin_lock(&kvm->mmu_lock);
 	kvm_stage2_wp_range(kvm, &kvm->arch.mmu, start, end);
+	kvm_nested_s2_wp(kvm);
 	spin_unlock(&kvm->mmu_lock);
 	kvm_flush_remote_tlbs(kvm);
 }
@@ -1306,6 +1309,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 		gfn_t gfn_offset, unsigned long mask)
 {
 	kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
+	kvm_nested_s2_wp(kvm);
 }
 
 static void coherent_cache_guest_page(struct kvm_vcpu *vcpu, kvm_pfn_t pfn,
@@ -1643,6 +1647,7 @@ static int handle_hva_to_gpa(struct kvm *kvm,
 static int kvm_unmap_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
 {
 	kvm_unmap_stage2_range(kvm, &kvm->arch.mmu, gpa, size);
+	kvm_nested_s2_clear(kvm);
 	return 0;
 }
 
@@ -1682,6 +1687,7 @@ static int kvm_set_spte_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data
 	 * through this calling path.
 	 */
 	stage2_set_pte(&kvm->arch.mmu, NULL, gpa, pte, 0);
+	kvm_nested_s2_clear(kvm);
 	return 0;
 }
 
@@ -1716,6 +1722,11 @@ static int kvm_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
 	if (pte_none(*pte))
 		return 0;
 
+	/*
+	 * TODO: Handle nested_mmu structures here using the reverse mapping in
+	 * a later version of patch series.
+	 */
+
 	return stage2_ptep_test_and_clear_young(pte);
 }
 
@@ -1736,6 +1747,11 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
 	if (!pte_none(*pte))	/* Just a page... */
 		return pte_young(*pte);
 
+	/*
+	 * TODO: Handle nested_mmu structures here using the reverse mapping in
+	 * a later version of patch series.
+	 */
+
 	return 0;
 }
 
@@ -1992,6 +2008,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 	spin_lock(&kvm->mmu_lock);
 	kvm_unmap_stage2_range(kvm, &kvm->arch.mmu, gpa, size);
+	kvm_nested_s2_clear(kvm);
 	spin_unlock(&kvm->mmu_lock);
 }