From patchwork Mon Jan 9 06:24:36 2017
X-Patchwork-Submitter: Jintack Lim
X-Patchwork-Id: 9503911
From: Jintack Lim
To: christoffer.dall@linaro.org, marc.zyngier@arm.com, pbonzini@redhat.com,
	rkrcmar@redhat.com, linux@armlinux.org.uk, catalin.marinas@arm.com,
	will.deacon@arm.com, vladimir.murzin@arm.com, suzuki.poulose@arm.com,
	mark.rutland@arm.com, james.morse@arm.com, lorenzo.pieralisi@arm.com,
	kevin.brodsky@arm.com, wcohen@redhat.com, shankerd@codeaurora.org,
	geoff@infradead.org, andre.przywara@arm.com, eric.auger@redhat.com,
	anna-maria@linutronix.de, shihwei@cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: jintack@cs.columbia.edu
Subject: [RFC 40/55] KVM: arm/arm64: Handle vttbr_el2 write operation from the guest hypervisor
Date: Mon, 9 Jan 2017 01:24:36 -0500
Message-Id: <1483943091-1364-41-git-send-email-jintack@cs.columbia.edu>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1483943091-1364-1-git-send-email-jintack@cs.columbia.edu>
References: <1483943091-1364-1-git-send-email-jintack@cs.columbia.edu>

Each nested VM is supposed to have an mmu (i.e. a shadow stage-2 page
table), and we create it when the guest hypervisor writes to vttbr_el2
with a new vmid. In case the guest hypervisor writes to vttbr_el2 with
an existing vmid, we check whether the base address has changed. If so,
what we have in the shadow page table is no longer valid, so unmap it.

Signed-off-by: Jintack Lim
---
 arch/arm/include/asm/kvm_host.h   |  1 +
 arch/arm/kvm/arm.c                |  1 +
 arch/arm64/include/asm/kvm_host.h |  1 +
 arch/arm64/include/asm/kvm_mmu.h  |  6 ++++
 arch/arm64/kvm/mmu-nested.c       | 71 +++++++++++++++++++++++++++++++++++++++
 arch/arm64/kvm/sys_regs.c         | 15 ++++++++-
 6 files changed, 94 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index fbde48d..ebf2810 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -84,6 +84,7 @@ struct kvm_arch {
 
 	/* Never used on arm but added to be compatible with arm64 */
 	struct list_head nested_mmu_list;
+	spinlock_t mmu_list_lock;
 
 	/* Interrupt controller */
 	struct vgic_dist vgic;
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 147df97..6fa5754 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -147,6 +147,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	kvm->arch.mmu.vmid.vmid_gen = 0;
 	kvm->arch.mmu.el2_vmid.vmid_gen = 0;
 	INIT_LIST_HEAD(&kvm->arch.nested_mmu_list);
+	spin_lock_init(&kvm->arch.mmu_list_lock);
 
 	/* The maximum number of VCPUs is limited by the host's GIC model */
 	kvm->arch.max_vcpus = vgic_present ?
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 23e2267..52eea76 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -99,6 +99,7 @@ struct kvm_arch {
 
 	/* Stage 2 shadow paging contexts for nested L2 VM */
 	struct list_head nested_mmu_list;
+	spinlock_t mmu_list_lock;
 };
 
 #define KVM_NR_MEM_OBJS 40
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index d1ef650..fdc9327 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -327,6 +327,7 @@ static inline unsigned int kvm_get_vmid_bits(void)
 #ifdef CONFIG_KVM_ARM_NESTED_HYP
 struct kvm_nested_s2_mmu *get_nested_mmu(struct kvm_vcpu *vcpu, u64 vttbr);
 struct kvm_s2_mmu *vcpu_get_active_s2_mmu(struct kvm_vcpu *vcpu);
+bool handle_vttbr_update(struct kvm_vcpu *vcpu, u64 vttbr);
 #else
 static inline struct kvm_nested_s2_mmu *get_nested_mmu(struct kvm_vcpu *vcpu,
 							u64 vttbr)
@@ -337,6 +338,11 @@ static inline struct kvm_s2_mmu *vcpu_get_active_s2_mmu(struct kvm_vcpu *vcpu)
 {
 	return &vcpu->kvm->arch.mmu;
 }
+
+static inline bool handle_vttbr_update(struct kvm_vcpu *vcpu, u64 vttbr)
+{
+	return false;
+}
 #endif
 
 static inline u64 kvm_get_vttbr(struct kvm_s2_vmid *vmid,
diff --git a/arch/arm64/kvm/mmu-nested.c b/arch/arm64/kvm/mmu-nested.c
index d52078f..0811d94 100644
--- a/arch/arm64/kvm/mmu-nested.c
+++ b/arch/arm64/kvm/mmu-nested.c
@@ -53,3 +53,74 @@ struct kvm_s2_mmu *vcpu_get_active_s2_mmu(struct kvm_vcpu *vcpu)
 
 	return &nested_mmu->mmu;
 }
+
+static struct kvm_nested_s2_mmu *create_nested_mmu(struct kvm_vcpu *vcpu,
+						   u64 vttbr)
+{
+	struct kvm_nested_s2_mmu *nested_mmu, *tmp_mmu;
+	struct list_head *nested_mmu_list = &vcpu->kvm->arch.nested_mmu_list;
+	bool need_free = false;
+	int ret;
+
+	nested_mmu = kzalloc(sizeof(struct kvm_nested_s2_mmu), GFP_KERNEL);
+	if (!nested_mmu)
+		return NULL;
+
+	ret = __kvm_alloc_stage2_pgd(&nested_mmu->mmu);
+	if (ret) {
+		kfree(nested_mmu);
+		return NULL;
+	}
+
+	spin_lock(&vcpu->kvm->arch.mmu_list_lock);
+	tmp_mmu = get_nested_mmu(vcpu, vttbr);
+	if (!tmp_mmu)
+		list_add_rcu(&nested_mmu->list, nested_mmu_list);
+	else /* Somebody already created and put a new nested_mmu to the list */
+		need_free = true;
+	spin_unlock(&vcpu->kvm->arch.mmu_list_lock);
+
+	if (need_free) {
+		__kvm_free_stage2_pgd(&nested_mmu->mmu);
+		kfree(nested_mmu);
+		nested_mmu = tmp_mmu;
+	}
+
+	return nested_mmu;
+}
+
+static void kvm_nested_s2_unmap(struct kvm_vcpu *vcpu)
+{
+	struct kvm_nested_s2_mmu *nested_mmu;
+	struct list_head *nested_mmu_list = &vcpu->kvm->arch.nested_mmu_list;
+
+	list_for_each_entry_rcu(nested_mmu, nested_mmu_list, list)
+		kvm_unmap_stage2_range(&nested_mmu->mmu, 0, KVM_PHYS_SIZE);
+}
+
+bool handle_vttbr_update(struct kvm_vcpu *vcpu, u64 vttbr)
+{
+	struct kvm_nested_s2_mmu *nested_mmu;
+
+	/* See if we can relax this */
+	if (!vttbr)
+		return true;
+
+	nested_mmu = (struct kvm_nested_s2_mmu *)get_nested_mmu(vcpu, vttbr);
+	if (!nested_mmu) {
+		nested_mmu = create_nested_mmu(vcpu, vttbr);
+		if (!nested_mmu)
+			return false;
+	} else {
+		/*
+		 * unmap the shadow page table if vttbr_el2 is
+		 * changed to different value
+		 */
+		if (vttbr != nested_mmu->virtual_vttbr)
+			kvm_nested_s2_unmap(vcpu);
+	}
+
+	nested_mmu->virtual_vttbr = vttbr;
+
+	return true;
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index e66f40d..ddb641c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -960,6 +960,19 @@ static bool access_cpacr(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+static bool access_vttbr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+			 const struct sys_reg_desc *r)
+{
+	u64 vttbr = p->regval;
+
+	if (!p->is_write) {
+		p->regval = vcpu_el2_reg(vcpu, r->reg);
+		return true;
+	}
+
+	return handle_vttbr_update(vcpu, vttbr);
+}
+
 static bool trap_el2_reg(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			 const struct sys_reg_desc *r)
@@ -1306,7 +1319,7 @@ static bool trap_el2_reg(struct kvm_vcpu *vcpu,
 		trap_el2_reg, reset_el2_val, TCR_EL2, 0 },
 	/* VTTBR_EL2 */
 	{ Op0(0b11), Op1(0b100), CRn(0b0010), CRm(0b0001), Op2(0b000),
-	  trap_el2_reg, reset_el2_val, VTTBR_EL2, 0 },
+	  access_vttbr, reset_el2_val, VTTBR_EL2, 0 },
 	/* VTCR_EL2 */
 	{ Op0(0b11), Op1(0b100), CRn(0b0010), CRm(0b0001), Op2(0b010),
 	  trap_el2_reg, reset_el2_val, VTCR_EL2, 0 },
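
A note for readers, not part of the patch: the commit message keys the
unmap decision on the vmid and the base address carried in vttbr_el2,
while handle_vttbr_update() above simply compares the whole cached
virtual_vttbr. The sketch below spells out the per-field view, assuming
the ARMv8 VTTBR_EL2 layout (stage-2 base address in the low bits, VMID
from bit 48 up); the simplified mask and the sketch_* helpers are
illustrative only and are not defined anywhere in this series.

#include <linux/types.h>

/* Simplified field extraction; the real base-address mask depends on
 * the configured PA range and ignores low control bits. */
#define SKETCH_VTTBR_VMID_SHIFT	48
#define SKETCH_VTTBR_BADDR_MASK	((1ULL << SKETCH_VTTBR_VMID_SHIFT) - 1)

static inline u64 sketch_vttbr_vmid(u64 vttbr)
{
	return vttbr >> SKETCH_VTTBR_VMID_SHIFT;
}

static inline u64 sketch_vttbr_baddr(u64 vttbr)
{
	return vttbr & SKETCH_VTTBR_BADDR_MASK;
}

/*
 * Decision on a guest hypervisor write of new_vttbr when cur_vttbr is
 * the value cached in an existing nested mmu context:
 *   - no context with this vmid yet     -> allocate a shadow stage-2 mmu
 *   - same vmid, different base address -> the shadow entries were built
 *     from a stage-2 table that no longer backs this vmid, so unmap
 *     them before reusing the context
 *   - same vmid, same base address      -> nothing to do
 */
static inline bool sketch_needs_shadow_unmap(u64 cur_vttbr, u64 new_vttbr)
{
	return sketch_vttbr_vmid(cur_vttbr) == sketch_vttbr_vmid(new_vttbr) &&
	       sketch_vttbr_baddr(cur_vttbr) != sketch_vttbr_baddr(new_vttbr);
}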