From patchwork Fri Dec 27 19:43:37 2013
X-Patchwork-Submitter: Marcelo Tosatti
X-Patchwork-Id: 3412371
Date: Fri, 27 Dec 2013 17:43:37 -0200
From: Marcelo Tosatti
To: Rom Freiman
Cc: Muli Ben-Yehuda, kvm@vger.kernel.org, pbonzini@redhat.com,
	xiaoguangrong@linux.vnet.ibm.com, Benoit Hudzia, Abel Gordon,
	Dan Aloni
Subject: Re: KVM: MMU: handle invalid root_hpa at __direct_map
Message-ID: <20131227194337.GA26796@amt.cnet>
References: <20131222145649.GA6611@amt.cnet>
In-Reply-To: <20131222145649.GA6611@amt.cnet>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailing-List: kvm@vger.kernel.org

On Sun, Dec 22, 2013 at 12:56:49PM -0200, Marcelo Tosatti wrote:
> On Sun, Dec 22, 2013 at 11:17:21AM +0200, Rom Freiman wrote:
> > Hello everyone,
> >
> > I've been chasing this bug for a while.
> >
> > According to my research, this bug fix works fine on the
> > 3.11.9-200.fc19.x86_64 kernel (I arrived at an almost identical
> > solution myself, and it did resolve the crash).
> >
> > The problem is that the patch does not seem to work on 3.13.0-rc2+:
> > the code flow is different there, and it crashes in ept_page_fault
> > without ever reaching __direct_map.
>
> Yep, similar problem. Care to send a patch against
> FNAME(page_fault) and kvm_mmu_get_spte_hierarchy?
>
> Maybe there are more vulnerable sites; we should secure them all.

Do these cover them all?
---

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 31a5702..e50425d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2832,6 +2832,9 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
 	bool ret = false;
 	u64 spte = 0ull;
 
+	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
+		return false;
+
 	if (!page_fault_can_be_fast(error_code))
 		return false;
 
@@ -3227,6 +3230,9 @@ static u64 walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr)
 	struct kvm_shadow_walk_iterator iterator;
 	u64 spte = 0ull;
 
+	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
+		return spte;
+
 	walk_shadow_page_lockless_begin(vcpu);
 	for_each_shadow_entry_lockless(vcpu, addr, iterator, spte)
 		if (!is_shadow_present_pte(spte))
@@ -4513,6 +4519,9 @@ int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 sptes[4])
 	u64 spte;
 	int nr_sptes = 0;
 
+	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
+		return nr_sptes;
+
 	walk_shadow_page_lockless_begin(vcpu);
 	for_each_shadow_entry_lockless(vcpu, addr, iterator, spte) {
 		sptes[iterator.level-1] = spte;
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index ad75d77..cba218a 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -569,6 +569,9 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 	if (FNAME(gpte_changed)(vcpu, gw, top_level))
 		goto out_gpte_changed;
 
+	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
+		goto out_gpte_changed;
+
 	for (shadow_walk_init(&it, vcpu, addr);
 	     shadow_walk_okay(&it) && it.level > gw->level;
 	     shadow_walk_next(&it)) {
@@ -820,6 +823,11 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
 	 */
 	mmu_topup_memory_caches(vcpu);
 
+	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa)) {
+		WARN_ON(1);
+		return;
+	}
+
 	spin_lock(&vcpu->kvm->mmu_lock);
 	for_each_shadow_entry(vcpu, gva, iterator) {
 		level = iterator.level;