From patchwork Thu May 9 00:53:24 2013
X-Patchwork-Submitter: "Nakajima, Jun"
X-Patchwork-Id: 2542481
From: Jun Nakajima
To: kvm@vger.kernel.org
Subject: [PATCH v3 12/13] nEPT: Move is_rsvd_bits_set() to paging_tmpl.h
Date: Wed, 8 May 2013 17:53:24 -0700
Message-Id: <1368060805-2790-12-git-send-email-jun.nakajima@intel.com>
In-Reply-To: <1368060805-2790-11-git-send-email-jun.nakajima@intel.com>
X-Mailer: git-send-email 1.8.2.1.610.g562af5b
X-Mailing-List: kvm@vger.kernel.org

Move is_rsvd_bits_set() to paging_tmpl.h so that it can be used to
check reserved bits in EPT page table entries as well.
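As background: paging_tmpl.h is compiled once per guest paging mode by
including it from mmu.c with a different PTTYPE each time, and FNAME()
gives every function in it a mode-specific prefix. Moving the helper
into this file therefore yields one instantiation per paging mode,
which is what lets the EPT walker introduced later in this series get
its own reserved-bits check. A minimal sketch of that mechanism,
simplified from arch/x86/kvm/mmu.c and paging_tmpl.h:

	/* In paging_tmpl.h (simplified): pick a prefix per mode. */
	#if PTTYPE == 64
	#define FNAME(name) paging64_##name
	#elif PTTYPE == 32
	#define FNAME(name) paging32_##name
	#endif

	/* In mmu.c: instantiate the template once per paging mode. */
	#define PTTYPE 64
	#include "paging_tmpl.h"	/* FNAME(x) expands to paging64_x */
	#undef PTTYPE

	#define PTTYPE 32
	#include "paging_tmpl.h"	/* FNAME(x) expands to paging32_x */
	#undef PTTYPE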
Signed-off-by: Jun Nakajima
Signed-off-by: Xinhao Xu
---
 arch/x86/kvm/mmu.c         |  8 --------
 arch/x86/kvm/paging_tmpl.h | 12 ++++++++++--
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 37f8d7f..93d6abf 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2468,14 +2468,6 @@ static void nonpaging_new_cr3(struct kvm_vcpu *vcpu)
 	mmu_free_roots(vcpu);
 }
 
-static bool is_rsvd_bits_set(struct kvm_mmu *mmu, u64 gpte, int level)
-{
-	int bit7;
-
-	bit7 = (gpte >> 7) & 1;
-	return (gpte & mmu->rsvd_bits_mask[bit7][level-1]) != 0;
-}
-
 static pfn_t pte_prefetch_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
 				     bool no_dirty_log)
 {
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index dc495f9..2432d49 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -124,11 +124,19 @@ static int FNAME(cmpxchg_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 }
 #endif
 
+static bool FNAME(is_rsvd_bits_set)(struct kvm_mmu *mmu, u64 gpte, int level)
+{
+	int bit7;
+
+	bit7 = (gpte >> 7) & 1;
+	return (gpte & mmu->rsvd_bits_mask[bit7][level-1]) != 0;
+}
+
 static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
 					 struct kvm_mmu_page *sp, u64 *spte,
 					 u64 gpte)
 {
-	if (is_rsvd_bits_set(&vcpu->arch.mmu, gpte, PT_PAGE_TABLE_LEVEL))
+	if (FNAME(is_rsvd_bits_set)(&vcpu->arch.mmu, gpte, PT_PAGE_TABLE_LEVEL))
 		goto no_present;
 
 	if (!is_present_gpte(gpte))
@@ -279,7 +287,7 @@ retry_walk:
 		if (unlikely(!is_present_gpte(pte)))
 			goto error;
 
-		if (unlikely(is_rsvd_bits_set(&vcpu->arch.mmu, pte,
+		if (unlikely(FNAME(is_rsvd_bits_set)(&vcpu->arch.mmu, pte,
 					      walker->level))) {
 			errcode |= PFERR_RSVD_MASK | PFERR_PRESENT_MASK;
 			goto error;
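For reviewers who want to see the moved check in isolation: bit 7 of a
directory-level guest PTE is the PS (large-page) bit, so
rsvd_bits_mask keeps two masks per level, one for small and one for
large mappings, indexed by that bit. Below is a self-contained
userspace illustration; struct mmu_ctx and the example mask value are
made-up stand-ins for the kernel's struct kvm_mmu setup, not actual
kernel code:

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Hypothetical stand-in for the relevant part of struct kvm_mmu. */
	struct mmu_ctx {
		/*
		 * Indexed [bit7][level - 1]: large (PS=1) and small
		 * mappings can have different reserved bits at the
		 * same page-table level.
		 */
		uint64_t rsvd_bits_mask[2][4];
	};

	static bool is_rsvd_bits_set(struct mmu_ctx *mmu, uint64_t gpte,
				     int level)
	{
		int bit7 = (gpte >> 7) & 1;

		return (gpte & mmu->rsvd_bits_mask[bit7][level - 1]) != 0;
	}

	int main(void)
	{
		struct mmu_ctx mmu = { 0 };

		/* Example mask: pretend bits 51:40 are reserved at level 1. */
		mmu.rsvd_bits_mask[0][0] = 0x000fff0000000000ULL;

		uint64_t good = 0x0000000000001003ULL; /* clean PTE */
		uint64_t bad  = good | (1ULL << 44);   /* reserved bit set */

		printf("good: %d bad: %d\n",
		       is_rsvd_bits_set(&mmu, good, 1),
		       is_rsvd_bits_set(&mmu, bad, 1));
		return 0;
	}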