From patchwork Tue Apr 14 04:04:10 2015
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 6213641
Message-ID: <552C91BA.1010703@linux.intel.com>
Date: Tue, 14 Apr 2015 12:04:10 +0800
From: Xiao Guangrong
To: Andres Lagar-Cavilla
CC: Wanpeng Li, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Paolo Bonzini, Eric Northup
Subject: [PATCH] KVM: MMU: fix comment in kvm_mmu_zap_collapsible_spte
References: <1428046825-6905-1-git-send-email-wanpeng.li@linux.intel.com>
    <552B1FC3.4070604@linux.intel.com>
The soft MMU uses direct shadow pages to fill a guest large mapping with
small pages when huge page mapping is disallowed on the host, so zapping
direct shadow pages works for both the soft MMU and the hard MMU.

Fix the comment to reflect this.

Signed-off-by: Xiao Guangrong
Reviewed-by: Wanpeng Li
---
 arch/x86/kvm/mmu.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 146f295..68c5487 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4481,9 +4481,11 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 		pfn = spte_to_pfn(*sptep);
 
 		/*
-		 * Only EPT supported for now; otherwise, one would need to
-		 * find out efficiently whether the guest page tables are
-		 * also using huge pages.
+		 * We cannot do huge page mapping for the indirect shadow
+		 * page (sp) found on the last rmap (level = 1) since an
+		 * indirect sp is synced with the page table in the guest,
+		 * and indirect sp->level = 1 means the guest page table is
+		 * using 4K page size mapping.
 		 */
 		if (sp->role.direct &&
 			!kvm_is_reserved_pfn(pfn) &&