From patchwork Thu Apr 21 15:34:44 2011
X-Patchwork-Submitter: Takuya Yoshikawa
X-Patchwork-Id: 725101
Date: Fri, 22 Apr 2011 00:34:44 +0900
From: Takuya Yoshikawa
To: avi@redhat.com, mtosatti@redhat.com
Cc: kvm@vger.kernel.org, yoshikawa.takuya@oss.ntt.co.jp,
	xiaoguangrong@cn.fujitsu.com, Joerg.Roedel@amd.com
Subject: [PATCH 1/1 v2] KVM: MMU: Optimize guest page table walk
Message-Id: <20110422003444.5b3a876a.takuya.yoshikawa@gmail.com>
In-Reply-To: <20110422003222.9d08aee3.takuya.yoshikawa@gmail.com>
References: <20110422003222.9d08aee3.takuya.yoshikawa@gmail.com>
X-Mailer: Sylpheed 3.1.0beta2 (GTK+ 2.22.0; x86_64-pc-linux-gnu)
X-Mailing-List: kvm@vger.kernel.org

From: Takuya Yoshikawa

This patch optimizes the guest page table walk by using get_user()
instead of copy_from_user().  With this patch applied,
paging64_walk_addr_generic() has become about 0.5us to 1.0us faster
on my Phenom II machine with NPT on.
Signed-off-by: Takuya Yoshikawa
---
 arch/x86/kvm/paging_tmpl.h |   23 ++++++++++++++++++++---
 1 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 74f8567..825d953 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -117,6 +117,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 				    gva_t addr, u32 access)
 {
 	pt_element_t pte;
+	pt_element_t __user *ptep_user;
 	gfn_t table_gfn;
 	unsigned index, pt_access, uninitialized_var(pte_access);
 	gpa_t pte_gpa;
@@ -152,6 +153,9 @@ walk:
 	pt_access = ACC_ALL;
 
 	for (;;) {
+		gfn_t real_gfn;
+		unsigned long host_addr;
+
 		index = PT_INDEX(addr, walker->level);
 
 		table_gfn = gpte_to_gfn(pte);
@@ -160,9 +164,22 @@ walk:
 		walker->table_gfn[walker->level - 1] = table_gfn;
 		walker->pte_gpa[walker->level - 1] = pte_gpa;
 
-		if (kvm_read_guest_page_mmu(vcpu, mmu, table_gfn, &pte,
-					    offset, sizeof(pte),
-					    PFERR_USER_MASK|PFERR_WRITE_MASK)) {
+		real_gfn = mmu->translate_gpa(vcpu, gfn_to_gpa(table_gfn),
+					      PFERR_USER_MASK|PFERR_WRITE_MASK);
+		if (real_gfn == UNMAPPED_GVA) {
+			present = false;
+			break;
+		}
+		real_gfn = gpa_to_gfn(real_gfn);
+
+		host_addr = gfn_to_hva(vcpu->kvm, real_gfn);
+		if (kvm_is_error_hva(host_addr)) {
+			present = false;
+			break;
+		}
+
+		ptep_user = (pt_element_t __user *)((void *)host_addr + offset);
+		if (get_user(pte, ptep_user)) {
 			present = false;
 			break;
 		}