From patchwork Mon Jan 9 06:24:43 2017
X-Patchwork-Submitter: Jintack Lim
X-Patchwork-Id: 9503947
From: Jintack Lim <jintack@cs.columbia.edu>
To: christoffer.dall@linaro.org, marc.zyngier@arm.com, pbonzini@redhat.com,
    rkrcmar@redhat.com, linux@armlinux.org.uk, catalin.marinas@arm.com,
    will.deacon@arm.com, vladimir.murzin@arm.com, suzuki.poulose@arm.com,
    mark.rutland@arm.com, james.morse@arm.com, lorenzo.pieralisi@arm.com,
    kevin.brodsky@arm.com, wcohen@redhat.com, shankerd@codeaurora.org,
    geoff@infradead.org, andre.przywara@arm.com, eric.auger@redhat.com,
    anna-maria@linutronix.de, shihwei@cs.columbia.edu,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: jintack@cs.columbia.edu
Subject: [RFC 47/55] KVM: arm/arm64: Forward the guest hypervisor's stage 2 permission faults
Date: Mon, 9 Jan 2017 01:24:43 -0500
Message-Id: <1483943091-1364-48-git-send-email-jintack@cs.columbia.edu>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1483943091-1364-1-git-send-email-jintack@cs.columbia.edu>
References: <1483943091-1364-1-git-send-email-jintack@cs.columbia.edu>

From: Christoffer Dall <christoffer.dall@linaro.org>

When faulting on a shadow stage 2 page table, we have to check whether
the fault was a permission fault and, if so, whether that fault needs
to be handled by the guest hypervisor first, because the guest
hypervisor may have created a less permissive stage 2 entry than the
operation required (for example, it mapped the page read-only at its
stage 2 while the nested VM is writing to it). Check if this is the
case, and inject a fault into the guest hypervisor if it is.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
---
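A standalone sketch of the decision this patch adds, for illustration
only (plain userspace C, not kernel code; the s2_trans struct and
must_forward() below are made-up stand-ins for the kvm_s2_trans fields
actually consulted by kvm_s2_handle_perm_fault() in the diff):

#include <stdbool.h>
#include <stdio.h>

/* Mock of the two kvm_s2_trans fields the check consults. */
struct s2_trans {
        bool readable;
        bool writable;
};

/*
 * Same decision as kvm_s2_handle_perm_fault(): true when the fault must
 * be injected into the guest hypervisor, false when the host can handle
 * it on its own shadow stage 2.
 */
static bool must_forward(bool write_fault, const struct s2_trans *trans)
{
        return (write_fault && !trans->writable) ||
               (!write_fault && !trans->readable);
}

int main(void)
{
        /* The guest hypervisor mapped the page read-only at its stage 2. */
        struct s2_trans ro = { .readable = true, .writable = false };

        printf("nested VM write -> forward? %d\n", must_forward(true, &ro));  /* 1 */
        printf("nested VM read  -> forward? %d\n", must_forward(false, &ro)); /* 0 */
        return 0;
}

The diff below wires this same check into kvm_handle_guest_abort()
right after the nested stage 2 walk.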
 arch/arm/include/asm/kvm_mmu.h   |  7 +++++++
 arch/arm/kvm/mmu.c               |  5 +++++
 arch/arm64/include/asm/kvm_mmu.h |  9 +++++++++
 arch/arm64/kvm/mmu-nested.c      | 33 +++++++++++++++++++++++++++++++++
 4 files changed, 54 insertions(+)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index ab41a10..0d106ae 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -241,6 +241,13 @@ static inline int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
 	return 0;
 }
 
+static inline int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu,
+					   phys_addr_t fault_ipa,
+					   struct kvm_s2_trans *trans)
+{
+	return 0;
+}
+
 static inline void kvm_nested_s2_unmap(struct kvm_vcpu *vcpu) { }
 static inline int kvm_nested_s2_init(struct kvm_vcpu *vcpu) { return 0; }
 static inline void kvm_nested_s2_teardown(struct kvm_vcpu *vcpu) { }
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index abdf345..68fc8e8 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -1542,6 +1542,11 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		ret = kvm_walk_nested_s2(vcpu, fault_ipa, &nested_trans);
 		if (ret)
 			goto out_unlock;
+
+		ret = kvm_s2_handle_perm_fault(vcpu, fault_ipa, &nested_trans);
+		if (ret)
+			goto out_unlock;
+
 		ipa = nested_trans.output;
 	}
 
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 2ac603d..2086296 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -338,6 +338,8 @@ struct kvm_s2_trans {
 bool handle_vttbr_update(struct kvm_vcpu *vcpu, u64 vttbr);
 int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
 		       struct kvm_s2_trans *result);
+int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
+			     struct kvm_s2_trans *trans);
 void kvm_nested_s2_unmap(struct kvm_vcpu *vcpu);
 int kvm_nested_s2_init(struct kvm_vcpu *vcpu);
 void kvm_nested_s2_teardown(struct kvm_vcpu *vcpu);
@@ -366,6 +368,13 @@ static inline int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
 	return 0;
 }
 
+static inline int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu,
+					   phys_addr_t fault_ipa,
+					   struct kvm_s2_trans *trans)
+{
+	return 0;
+}
+
 static inline void kvm_nested_s2_unmap(struct kvm_vcpu *vcpu) { }
 static inline int kvm_nested_s2_init(struct kvm_vcpu *vcpu) { return 0; }
 static inline void kvm_nested_s2_teardown(struct kvm_vcpu *vcpu) { }
diff --git a/arch/arm64/kvm/mmu-nested.c b/arch/arm64/kvm/mmu-nested.c
index b579d23..65ad0da 100644
--- a/arch/arm64/kvm/mmu-nested.c
+++ b/arch/arm64/kvm/mmu-nested.c
@@ -52,6 +52,19 @@ static unsigned int pa_max(void)
 	return ps_to_output_size(parange);
 }
 
+static int vcpu_inject_s2_perm_fault(struct kvm_vcpu *vcpu, gpa_t ipa,
+				     int level)
+{
+	u32 esr;
+
+	vcpu->arch.ctxt.el2_regs[FAR_EL2] = vcpu->arch.fault.far_el2;
+	vcpu->arch.ctxt.el2_regs[HPFAR_EL2] = vcpu->arch.fault.hpfar_el2;
+	esr = kvm_vcpu_get_hsr(vcpu) & ~ESR_ELx_FSC;
+	esr |= ESR_ELx_FSC_PERM;
+	esr |= level & 0x3;
+	return kvm_inject_nested_sync(vcpu, esr);
+}
+
 static int vcpu_inject_s2_trans_fault(struct kvm_vcpu *vcpu, gpa_t ipa,
 				      int level)
 {
@@ -268,6 +281,26 @@ int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
 	return walk_nested_s2_pgd(vcpu, gipa, &wi, result);
 }
 
+/*
+ * Returns non-zero if permission fault is handled by injecting it to the next
+ * level hypervisor.
+ */
+int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
+			     struct kvm_s2_trans *trans)
+{
+	unsigned long fault_status = kvm_vcpu_trap_get_fault_type(vcpu);
+	bool write_fault = kvm_is_write_fault(vcpu);
+
+	if (fault_status != FSC_PERM)
+		return 0;
+
+	if ((write_fault && !trans->writable) ||
+	    (!write_fault && !trans->readable))
+		return vcpu_inject_s2_perm_fault(vcpu, fault_ipa, trans->level);
+
+	return 0;
+}
+
 /* expects kvm->mmu_lock to be held */
 void kvm_nested_s2_all_vcpus_wp(struct kvm *kvm)
 {
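
For reference, the syndrome rewrite done in vcpu_inject_s2_perm_fault()
can be exercised on its own along these lines (standalone C, not kernel
code; the ESR_ELx_FSC and ESR_ELx_FSC_PERM values are spelled out by
hand here and assumed to match the usual arm64 <asm/esr.h> encodings):

#include <stdint.h>
#include <stdio.h>

#define ESR_ELx_FSC		0x3f	/* fault status code, ESR bits [5:0] */
#define ESR_ELx_FSC_PERM	0x0c	/* permission fault, level in bits [1:0] */

/*
 * Keep the original syndrome bits (WnR, ISV, ...) but report a
 * permission fault at the stage 2 level that denied the access.
 */
static uint32_t forwarded_esr(uint32_t hsr, int level)
{
	uint32_t esr = hsr & ~ESR_ELx_FSC;

	esr |= ESR_ELx_FSC_PERM;
	esr |= level & 0x3;
	return esr;
}

int main(void)
{
	/*
	 * An example syndrome whose FSC is 0x07 comes back out with FSC
	 * 0x0f, i.e. a level 3 permission fault from the guest
	 * hypervisor's point of view.
	 */
	printf("FSC = 0x%02x\n",
	       (unsigned int)(forwarded_esr(0x93c00007u, 3) & ESR_ELx_FSC));
	return 0;
}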