From patchwork Sun May 19 04:52:31 2013
X-Patchwork-Submitter: "Nakajima, Jun"
X-Patchwork-Id: 2589741
From: Jun Nakajima
To: kvm@vger.kernel.org
Cc: Gleb Natapov, Paolo Bonzini
Subject: [PATCH v3 12/13] nEPT: Move is_rsvd_bits_set() to paging_tmpl.h
Date: Sat, 18 May 2013 21:52:31 -0700
Message-Id: <1368939152-11406-12-git-send-email-jun.nakajima@intel.com>
X-Mailer: git-send-email 1.8.2.1.610.g562af5b
In-Reply-To: <1368939152-11406-1-git-send-email-jun.nakajima@intel.com>
References: <1368939152-11406-1-git-send-email-jun.nakajima@intel.com>
X-Mailing-List: kvm@vger.kernel.org

Move is_rsvd_bits_set() to paging_tmpl.h so that it can be used to check
reserved bits in EPT page table entries as well.
Signed-off-by: Jun Nakajima
Signed-off-by: Xinhao Xu
---
 arch/x86/kvm/mmu.c         |  8 --------
 arch/x86/kvm/paging_tmpl.h | 12 ++++++++++--
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 37f8d7f..93d6abf 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2468,14 +2468,6 @@ static void nonpaging_new_cr3(struct kvm_vcpu *vcpu)
 	mmu_free_roots(vcpu);
 }
 
-static bool is_rsvd_bits_set(struct kvm_mmu *mmu, u64 gpte, int level)
-{
-	int bit7;
-
-	bit7 = (gpte >> 7) & 1;
-	return (gpte & mmu->rsvd_bits_mask[bit7][level-1]) != 0;
-}
-
 static pfn_t pte_prefetch_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
 				     bool no_dirty_log)
 {
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index dc495f9..2432d49 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -124,11 +124,19 @@ static int FNAME(cmpxchg_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 }
 #endif
 
+static bool FNAME(is_rsvd_bits_set)(struct kvm_mmu *mmu, u64 gpte, int level)
+{
+	int bit7;
+
+	bit7 = (gpte >> 7) & 1;
+	return (gpte & mmu->rsvd_bits_mask[bit7][level-1]) != 0;
+}
+
 static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
 					 struct kvm_mmu_page *sp, u64 *spte,
 					 u64 gpte)
 {
-	if (is_rsvd_bits_set(&vcpu->arch.mmu, gpte, PT_PAGE_TABLE_LEVEL))
+	if (FNAME(is_rsvd_bits_set)(&vcpu->arch.mmu, gpte, PT_PAGE_TABLE_LEVEL))
 		goto no_present;
 
 	if (!is_present_gpte(gpte))
@@ -279,7 +287,7 @@ retry_walk:
 		if (unlikely(!is_present_gpte(pte)))
 			goto error;
 
-		if (unlikely(is_rsvd_bits_set(&vcpu->arch.mmu, pte,
+		if (unlikely(FNAME(is_rsvd_bits_set)(&vcpu->arch.mmu, pte,
 					      walker->level))) {
 			errcode |= PFERR_RSVD_MASK | PFERR_PRESENT_MASK;
 			goto error;
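
For reference, below is a standalone sketch (not kernel code) of the idea the
move relies on: paging_tmpl.h is included once per paging mode with FNAME()
redefined each time, so wrapping the reserved-bits check in FNAME() gives a
later EPT instantiation its own copy that can be pointed at EPT-specific
rsvd_bits_mask values. The struct mock_mmu, MOCK_LEVELS, and the ept_ prefix
here are illustrative assumptions, not the actual kernel definitions.

/*
 * Simplified model of the paging_tmpl.h "template" trick.
 * Build: cc -std=c99 -Wall sketch.c && ./a.out
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MOCK_LEVELS 4

struct mock_mmu {
	/* rsvd_bits_mask[bit7][level-1], mirroring the shape used in struct kvm_mmu */
	uint64_t rsvd_bits_mask[2][MOCK_LEVELS];
};

/* What one inclusion of paging_tmpl.h effectively produces for the EPT case:
 * FNAME() pastes a per-mode prefix onto the function name, so each paging
 * mode gets its own compiled copy of the same body. */
#define FNAME(name) ept_##name

static bool FNAME(is_rsvd_bits_set)(struct mock_mmu *mmu, uint64_t gpte, int level)
{
	int bit7 = (gpte >> 7) & 1;

	return (gpte & mmu->rsvd_bits_mask[bit7][level - 1]) != 0;
}

#undef FNAME

int main(void)
{
	struct mock_mmu mmu = { .rsvd_bits_mask = { { 0 } } };

	/* Pretend bits 51:48 are reserved at level 1 when bit 7 is clear. */
	mmu.rsvd_bits_mask[0][0] = 0xfULL << 48;

	uint64_t good = 0x1000;                 /* no reserved bits set */
	uint64_t bad  = 0x1000 | (1ULL << 50);  /* reserved bit 50 set  */

	/* The macro above expanded to ept_is_rsvd_bits_set(). */
	printf("good: %d, bad: %d\n",
	       ept_is_rsvd_bits_set(&mmu, good, 1),
	       ept_is_rsvd_bits_set(&mmu, bad, 1));
	return 0;
}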