From patchwork Wed Mar 3 19:12:18 2010
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 83424
From: Joerg Roedel
To: Avi Kivity, Marcelo Tosatti
Cc: Alexander Graf, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Joerg Roedel
Subject: [PATCH 15/18] KVM: MMU: Propagate the right fault back to the guest after gva_to_gpa
Date: Wed, 3 Mar 2010 20:12:18 +0100
Message-ID: <1267643541-451-16-git-send-email-joerg.roedel@amd.com>
X-Mailer: git-send-email 1.7.0
In-Reply-To: <1267643541-451-1-git-send-email-joerg.roedel@amd.com>
References: <1267643541-451-1-git-send-email-joerg.roedel@amd.com>
X-Mailing-List: kvm@vger.kernel.org

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 64f619b..b42b27e 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -47,6 +47,7 @@
 #define PFERR_USER_MASK (1U << 2)
 #define PFERR_RSVD_MASK (1U << 3)
 #define PFERR_FETCH_MASK (1U << 4)
+#define PFERR_NESTED_MASK (1U << 31)
 
 int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 sptes[4]);
 int kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *context);
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index c0158d8..9fc5fb1 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -154,6 +154,7 @@ walk:
 		pte_gpa = mmu->translate_gpa(vcpu, pte_gpa, &error);
 		if (pte_gpa == UNMAPPED_GVA) {
+			error |= PFERR_NESTED_MASK;
 			walker->error_code = error;
 			return 0;
 		}
@@ -223,6 +224,7 @@ walk:
 	pte_gpa = gfn_to_gpa(walker->gfn);
 	pte_gpa = mmu->translate_gpa(vcpu, pte_gpa, &error);
 	if (pte_gpa == UNMAPPED_GVA) {
+		error |= PFERR_NESTED_MASK;
 		walker->error_code = error;
 		return 0;
 	}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2883ce8..9f8b02d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -314,6 +314,19 @@ void kvm_inject_page_fault(struct kvm_vcpu *vcpu, unsigned long addr,
 	kvm_queue_exception_e(vcpu, PF_VECTOR, error_code);
 }
 
+void kvm_propagate_fault(struct kvm_vcpu *vcpu, unsigned long addr, u32 error_code)
+{
+	u32 nested, error;
+
+	nested = error_code & PFERR_NESTED_MASK;
+	error  = error_code & ~PFERR_NESTED_MASK;
+
+	if (vcpu->arch.mmu.nested && !nested)
+		vcpu->arch.nested_mmu.inject_page_fault(vcpu, addr, error);
+	else
+		vcpu->arch.mmu.inject_page_fault(vcpu, addr, error);
+}
+
 void kvm_inject_nmi(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.nmi_pending = 1;
@@ -3546,7 +3559,7 @@ static int pio_copy_data(struct kvm_vcpu *vcpu)
 	ret = kvm_read_guest_virt(q, p, bytes, vcpu, &error_code);
 
 	if (ret == X86EMUL_PROPAGATE_FAULT)
-		kvm_inject_page_fault(vcpu, q, error_code);
+		kvm_propagate_fault(vcpu, q, error_code);
 
 	return ret;
 }