From patchwork Tue May 6 00:40:59 2014
X-Patchwork-Submitter: Bandan Das
X-Patchwork-Id: 4118451
From: Bandan Das
To: kvm@vger.kernel.org
Cc: Paolo Bonzini, Marcelo Tosatti, Gleb Natapov, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 3/3] KVM: x86: cache userspace address for faster fetches
Date: Mon, 5 May 2014 20:40:59 -0400
Message-Id: <1399336859-7227-4-git-send-email-bsd@redhat.com>
In-Reply-To: <1399336859-7227-1-git-send-email-bsd@redhat.com>
References: <1399336859-7227-1-git-send-email-bsd@redhat.com>

On every instruction fetch, kvm_read_guest_virt_helper does the gva to gpa
translation followed by searching for the memslot. Store the gva->hva mapping
so that if there's a match we can directly call __copy_from_user().

Suggested-by: Paolo Bonzini
Signed-off-by: Bandan Das
---
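The change amounts to a one-entry cache of the last (gfn -> host page)
translation, checked on every fetch and invalidated when the mapping may
change. For illustration only, here is a minimal standalone sketch of the
pattern; struct addr_cache, slow_translate() and translate() below are
hypothetical stand-ins for the emulator context fields and the
memory_prepare() walk, not the actual KVM interfaces:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Stand-in for the cache fields the patch adds to x86_emulate_ctxt. */
struct addr_cache {
        bool valid;
        uint64_t gfn;     /* guest frame number: gva >> PAGE_SHIFT */
        uintptr_t upage;  /* host address of that page >> PAGE_SHIFT */
};

/* Stand-in for the expensive gva->gpa->hva walk done by memory_prepare(). */
static uintptr_t slow_translate(uint64_t gva)
{
        static _Alignas(PAGE_SIZE) char page[PAGE_SIZE]; /* fake host page */

        return (uintptr_t)page + (gva & (PAGE_SIZE - 1));
}

static uintptr_t translate(struct addr_cache *c, uint64_t gva)
{
        uint64_t gfn = gva >> PAGE_SHIFT;
        uintptr_t uaddr;

        /* Hit: same guest page as the last access, skip the walk. */
        if (c->valid && c->gfn == gfn)
                return (c->upage << PAGE_SHIFT) + (gva & (PAGE_SIZE - 1));

        /* Miss: do the full translation and remember the page pair. */
        uaddr = slow_translate(gva);
        c->gfn = gfn;
        c->upage = uaddr >> PAGE_SHIFT;
        c->valid = true;
        return uaddr;
}

int main(void)
{
        struct addr_cache c = { 0 };

        translate(&c, 0x7000);  /* miss: fills the cache */
        translate(&c, 0x7008);  /* hit: same gfn, no walk */
        c.valid = false;        /* what emulator_memory_finish() now does */
        printf("cached gfn %llu, valid %d\n",
               (unsigned long long)c.gfn, (int)c.valid);
        return 0;
}

On a hit, the gva->gpa walk and the memslot search are skipped entirely;
the only work left is recomputing the offset within the page. The sketch
covers only the fetch path; as the comment in the patch asks, it is an
open question where else the cache needs to be invalidated.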
 arch/x86/include/asm/kvm_emulate.h |  7 ++++++-
 arch/x86/kvm/x86.c                 | 33 +++++++++++++++++++++++----------
 2 files changed, 29 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/kvm_emulate.h b/arch/x86/include/asm/kvm_emulate.h
index 085d688..20ccde4 100644
--- a/arch/x86/include/asm/kvm_emulate.h
+++ b/arch/x86/include/asm/kvm_emulate.h
@@ -323,10 +323,11 @@ struct x86_emulate_ctxt {
 	int (*execute)(struct x86_emulate_ctxt *ctxt);
 	int (*check_perm)(struct x86_emulate_ctxt *ctxt);
 	/*
-	 * The following five fields are cleared together,
+	 * The following six fields are cleared together,
 	 * the rest are initialized unconditionally in x86_decode_insn
 	 * or elsewhere
 	 */
+	bool addr_cache_valid;
 	u8 rex_prefix;
 	u8 lock_prefix;
 	u8 rep_prefix;
@@ -348,6 +349,10 @@ struct x86_emulate_ctxt {
 	struct fetch_cache fetch;
 	struct read_cache io_read;
 	struct read_cache mem_read;
+	struct {
+		gfn_t gfn;
+		unsigned long uaddr;
+	} addr_cache;
 };
 
 /* Repeat String Operation Prefix */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index cf69e3b..7afcfc7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4072,26 +4072,38 @@ static int kvm_read_guest_virt_helper(gva_t addr, void *val, unsigned int bytes,
 		unsigned toread = min(bytes, (unsigned)PAGE_SIZE - offset);
 		int ret;
 		unsigned long uaddr;
+		gfn_t gfn = addr >> PAGE_SHIFT;
 
-		ret = ctxt->ops->memory_prepare(ctxt, addr, toread,
-						exception, false,
-						NULL, &uaddr);
-		if (ret != X86EMUL_CONTINUE)
-			return ret;
+		if (ctxt->addr_cache_valid &&
+		    (ctxt->addr_cache.gfn == gfn))
+			uaddr = (ctxt->addr_cache.uaddr << PAGE_SHIFT) +
+				offset_in_page(addr);
+		else {
+			ret = ctxt->ops->memory_prepare(ctxt, addr, toread,
+							exception, false,
+							NULL, &uaddr);
+			if (ret != X86EMUL_CONTINUE)
+				return ret;
+
+			if (unlikely(kvm_is_error_hva(uaddr))) {
+				r = X86EMUL_PROPAGATE_FAULT;
+				return r;
+			}
 
-		if (unlikely(kvm_is_error_hva(uaddr))) {
-			r = X86EMUL_PROPAGATE_FAULT;
-			return r;
+			/* Cache gfn and hva */
+			ctxt->addr_cache.gfn = addr >> PAGE_SHIFT;
+			ctxt->addr_cache.uaddr = uaddr >> PAGE_SHIFT;
+			ctxt->addr_cache_valid = true;
 		}
 
 		ret = __copy_from_user(data, (void __user *)uaddr, toread);
 		if (ret < 0) {
 			r = X86EMUL_IO_NEEDED;
+			/* Where else should we invalidate cache ? */
+			ctxt->ops->memory_finish(ctxt, NULL, uaddr);
 			return r;
 		}
 
-		ctxt->ops->memory_finish(ctxt, NULL, uaddr);
-
 		bytes -= toread;
 		data += toread;
 		addr += toread;
@@ -4339,6 +4351,7 @@ static void emulator_memory_finish(struct x86_emulate_ctxt *ctxt,
 	struct kvm_memory_slot *memslot;
 	gfn_t gfn;
 
+	ctxt->addr_cache_valid = false;
 	if (!opaque)
 		return;