From patchwork Thu Aug 16 15:30:19 2012
Subject: [PATCH v10 12/14] KVM: ARM: Handle guest faults in KVM
To: kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
From: Christoffer Dall
Date: Thu, 16 Aug 2012 11:30:19 -0400
Message-ID: <20120816153019.21484.74222.stgit@ubuntu>
In-Reply-To: <20120816152637.21484.65421.stgit@ubuntu>
References: <20120816152637.21484.65421.stgit@ubuntu>
User-Agent: StGit/0.15

Handles the guest faults in KVM by mapping in corresponding user pages
in the 2nd stage page tables.
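For reviewers who don't have the HPFAR layout memorized: the register
reports bits [39:12] of the faulting IPA in its bits [31:4], which is
why the handler below masks off the low nibble and shifts left by 8 to
recover the address. A minimal stand-alone sketch of that arithmetic
(illustration only, not part of the patch; the register value is made
up):

  #include <stdint.h>
  #include <stdio.h>

  #define HPFAR_MASK (~0xfU)

  int main(void)
  {
          /* Made-up HPFAR value: IPA[39:12] live in bits [31:4] */
          uint32_t hpfar = 0x00801230;

          /* Same computation as the handler performs:
           * fault_ipa = ((phys_addr_t)vcpu->arch.hpfar & HPFAR_MASK) << 8;
           */
          uint64_t fault_ipa = (uint64_t)(hpfar & HPFAR_MASK) << 8;

          printf("HPFAR 0x%08x -> IPA 0x%010llx\n",
                 hpfar, (unsigned long long)fault_ipa);
          return 0;
  }

This prints "HPFAR 0x00801230 -> IPA 0x0080123000", i.e. a stage-2
fault on guest physical address 0x80123000.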
Signed-off-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---
 arch/arm/include/asm/kvm_arm.h |    9 ++++
 arch/arm/include/asm/kvm_asm.h |    2 +
 arch/arm/kvm/mmu.c             |  102 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 112 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/kvm_arm.h b/arch/arm/include/asm/kvm_arm.h
index ae586c1..4cff3b7 100644
--- a/arch/arm/include/asm/kvm_arm.h
+++ b/arch/arm/include/asm/kvm_arm.h
@@ -158,11 +158,20 @@
 #define HSR_ISS		(HSR_IL - 1)
 #define HSR_ISV_SHIFT	(24)
 #define HSR_ISV		(1U << HSR_ISV_SHIFT)
+#define HSR_FSC		(0x3f)
+#define HSR_FSC_TYPE	(0x3c)
+#define HSR_WNR		(1 << 6)
 #define HSR_CV_SHIFT	(24)
 #define HSR_CV		(1U << HSR_CV_SHIFT)
 #define HSR_COND_SHIFT	(20)
 #define HSR_COND	(0xfU << HSR_COND_SHIFT)
 
+#define FSC_FAULT	(0x04)
+#define FSC_PERM	(0x0c)
+
+/* Hyp Prefetch Fault Address Register (HPFAR/HDFAR) */
+#define HPFAR_MASK	(~0xf)
+
 #define HSR_EC_UNKNOWN	(0x00)
 #define HSR_EC_WFI	(0x01)
 #define HSR_EC_CP15_32	(0x03)
diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 55b6446..85bd676 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -48,6 +48,8 @@ extern char __kvm_hyp_vector[];
 extern char __kvm_hyp_code_start[];
 extern char __kvm_hyp_code_end[];
 
+extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
+
 extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 6cb0e38..448fbd6 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include
 
 static DEFINE_MUTEX(kvm_hyp_pgd_mutex);
 
@@ -491,9 +492,108 @@ out:
 	return ret;
 }
 
+static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
+			  gfn_t gfn, struct kvm_memory_slot *memslot,
+			  bool is_iabt)
+{
+	pte_t new_pte;
+	pfn_t pfn;
+	int ret;
+	bool write_fault, writable;
+	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
+
+	/* TODO: Use instr. decoding for non-ISV to determine r/w fault */
+	if (is_iabt)
+		write_fault = false;
+	else if ((vcpu->arch.hsr & HSR_ISV) && !(vcpu->arch.hsr & HSR_WNR))
+		write_fault = false;
+	else
+		write_fault = true;
+
+	if ((vcpu->arch.hsr & HSR_FSC_TYPE) == FSC_PERM && !write_fault) {
+		kvm_err("Unexpected L2 read permission error\n");
+		return -EFAULT;
+	}
+
+	pfn = gfn_to_pfn_prot(vcpu->kvm, gfn, write_fault, &writable);
+
+	if (is_error_pfn(pfn)) {
+		put_page(pfn_to_page(pfn));
+		kvm_err("No host mapping: gfn %u (0x%08x)\n",
+			(unsigned int)gfn,
+			(unsigned int)gfn << PAGE_SHIFT);
+		return -EFAULT;
+	}
+
+	/* We need minimum second+third level pages */
+	ret = mmu_topup_memory_cache(memcache, 2, KVM_NR_MEM_OBJS);
+	if (ret)
+		return ret;
+	new_pte = pfn_pte(pfn, PAGE_KVM_GUEST);
+	if (writable)
+		new_pte |= L_PTE2_WRITE;
+	spin_lock(&vcpu->kvm->arch.pgd_lock);
+	stage2_set_pte(vcpu->kvm, memcache, fault_ipa, &new_pte);
+	spin_unlock(&vcpu->kvm->arch.pgd_lock);
+
+	return ret;
+}
+
+/**
+ * kvm_handle_guest_abort - handles all 2nd stage aborts
+ * @vcpu:	the VCPU pointer
+ * @run:	the kvm_run structure
+ *
+ * Any abort that gets to the host is almost guaranteed to be caused by a
+ * missing second stage translation table entry, which can mean either that
+ * the guest simply needs more memory and we must allocate an appropriate
+ * page, or that the guest tried to access I/O memory, which is emulated by
+ * user space. The distinction is based on the IPA that caused the fault and
+ * on whether this memory region has been registered as standard RAM by
+ * user space.
+ */
 int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	return -EINVAL;
+	unsigned long hsr_ec;
+	unsigned long fault_status;
+	phys_addr_t fault_ipa;
+	struct kvm_memory_slot *memslot = NULL;
+	bool is_iabt;
+	gfn_t gfn;
+	int ret;
+
+	hsr_ec = vcpu->arch.hsr >> HSR_EC_SHIFT;
+	is_iabt = (hsr_ec == HSR_EC_IABT);
+
+	/* Check that the second stage fault is a translation fault */
+	fault_status = (vcpu->arch.hsr & HSR_FSC_TYPE);
+	if (fault_status != FSC_FAULT && fault_status != FSC_PERM) {
+		kvm_err("Unsupported fault status: EC=%#lx DFSC=%#lx\n",
+			hsr_ec, fault_status);
+		return -EFAULT;
+	}
+
+	fault_ipa = ((phys_addr_t)vcpu->arch.hpfar & HPFAR_MASK) << 8;
+
+	gfn = fault_ipa >> PAGE_SHIFT;
+	if (!kvm_is_visible_gfn(vcpu->kvm, gfn)) {
+		if (is_iabt) {
+			kvm_err("Inst. abort on I/O address %08lx\n",
+				(unsigned long)fault_ipa);
+			return -EFAULT;
+		}
+
+		kvm_pr_unimpl("I/O address abort...");
+		return 0;
+	}
+
+	memslot = gfn_to_memslot(vcpu->kvm, gfn);
+	if (!memslot->user_alloc) {
+		kvm_err("non user-alloc memslots not supported\n");
+		return -EINVAL;
+	}
+
+	ret = user_mem_abort(vcpu, fault_ipa, gfn, memslot, is_iabt);
+	return ret ? ret : 1;
 }
 
 static bool hva_to_gpa(struct kvm *kvm, unsigned long hva, gpa_t *gpa)
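
A footnote for reviewers (not part of the patch): the read/write
classification at the top of user_mem_abort() is compact enough to be
easy to misread, so here is the same decision as a stand-alone helper,
with the HSR_ISV/HSR_WNR values copied from the kvm_arm.h hunk above:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define HSR_ISV (1U << 24)
  #define HSR_WNR (1U << 6)

  /* Mirrors user_mem_abort(): instruction aborts never write; a data
   * abort with a valid syndrome (ISV set) and WNR clear is a read;
   * everything else, including the no-syndrome case, is treated as a
   * write until the TODO about instruction decoding is resolved.
   */
  static bool is_write_fault(uint32_t hsr, bool is_iabt)
  {
          if (is_iabt)
                  return false;
          if ((hsr & HSR_ISV) && !(hsr & HSR_WNR))
                  return false;
          return true;
  }

  int main(void)
  {
          printf("iabt:        %d\n", is_write_fault(HSR_ISV | HSR_WNR, true));  /* 0 */
          printf("read:        %d\n", is_write_fault(HSR_ISV, false));           /* 0 */
          printf("write:       %d\n", is_write_fault(HSR_ISV | HSR_WNR, false)); /* 1 */
          printf("no syndrome: %d\n", is_write_fault(0, false));                 /* 1 */
          return 0;
  }

The last case is the conservative default the TODO refers to: without a
valid syndrome there is no cheap way to tell reads from writes, so the
fault is handled as a write.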