From patchwork Mon May 6 07:04:31 2013
X-Patchwork-Submitter: "Nakajima, Jun"
X-Patchwork-Id: 2522941
From: Jun Nakajima
To: kvm@vger.kernel.org
Subject: [PATCH v2 12/13] nEPT: Move is_rsvd_bits_set() to paging_tmpl.h
Date: Mon, 6 May 2013 00:04:31 -0700
Message-Id: <1367823872-25895-12-git-send-email-jun.nakajima@intel.com>
X-Mailer: git-send-email 1.8.2.1.610.g562af5b
In-Reply-To: <1367823872-25895-11-git-send-email-jun.nakajima@intel.com>
X-Mailing-List: kvm@vger.kernel.org

Move is_rsvd_bits_set() to paging_tmpl.h so that it can be used to check
reserved bits in EPT page table entries as well.
Signed-off-by: Jun Nakajima
Signed-off-by: Xinhao Xu
---
 arch/x86/kvm/mmu.c         |  8 --------
 arch/x86/kvm/paging_tmpl.h | 12 ++++++++++--
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 99bfc5e..054c68b 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2460,14 +2460,6 @@ static void nonpaging_new_cr3(struct kvm_vcpu *vcpu)
 	mmu_free_roots(vcpu);
 }
 
-static bool is_rsvd_bits_set(struct kvm_mmu *mmu, u64 gpte, int level)
-{
-	int bit7;
-
-	bit7 = (gpte >> 7) & 1;
-	return (gpte & mmu->rsvd_bits_mask[bit7][level-1]) != 0;
-}
-
 static pfn_t pte_prefetch_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
 				     bool no_dirty_log)
 {
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 51dca23..777d5d7 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -124,11 +124,19 @@ static int FNAME(cmpxchg_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 }
 #endif
 
+static bool FNAME(is_rsvd_bits_set)(struct kvm_mmu *mmu, u64 gpte, int level)
+{
+	int bit7;
+
+	bit7 = (gpte >> 7) & 1;
+	return (gpte & mmu->rsvd_bits_mask[bit7][level-1]) != 0;
+}
+
 static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
 					 struct kvm_mmu_page *sp, u64 *spte,
 					 u64 gpte)
 {
-	if (is_rsvd_bits_set(&vcpu->arch.mmu, gpte, PT_PAGE_TABLE_LEVEL))
+	if (FNAME(is_rsvd_bits_set)(&vcpu->arch.mmu, gpte, PT_PAGE_TABLE_LEVEL))
 		goto no_present;
 
 	if (!is_present_gpte(gpte))
@@ -279,7 +287,7 @@ retry_walk:
 	if (unlikely(!is_present_gpte(pte)))
 		goto error;
 
-	if (unlikely(is_rsvd_bits_set(&vcpu->arch.mmu, pte,
+	if (unlikely(FNAME(is_rsvd_bits_set)(&vcpu->arch.mmu, pte,
 				      walker->level))) {
 		errcode |= PFERR_RSVD_MASK | PFERR_PRESENT_MASK;
 		goto error;