From patchwork Thu Jul 1 13:53:48 2010
From: Xiao Guangrong
X-Patchwork-Id: 109122
Message-ID: <4C2C9DEC.4040008@cn.fujitsu.com>
Date: Thu, 01 Jul 2010 21:53:48 +0800
To: Avi Kivity
Cc: Marcelo Tosatti, LKML, KVM list
Subject: [PATCH v4 2/6] KVM: MMU: introduce gfn_to_page_many_atomic() function
References: <4C2C9DC0.8050607@cn.fujitsu.com>
In-Reply-To: <4C2C9DC0.8050607@cn.fujitsu.com>
List-ID: kvm@vger.kernel.org

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index e0fb543..53f663c 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -288,6 +288,8 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 void kvm_disable_largepages(void);
 void kvm_arch_flush_shadow(struct kvm *kvm);
 
+int gfn_to_page_many_atomic(struct kvm *kvm, gfn_t gfn,
+			    struct page **pages, int nr_pages, bool *enough);
 struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn);
 unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn);
 void kvm_release_page_clean(struct page *page);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 3f976b0..cc360d7 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -923,15 +923,25 @@ static unsigned long gfn_to_hva_memslot(struct kvm_memory_slot *slot, gfn_t gfn)
 	return slot->userspace_addr + (gfn - slot->base_gfn) * PAGE_SIZE;
 }
 
-unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn)
+static unsigned long gfn_to_hva_many(struct kvm *kvm, gfn_t gfn, int *entry)
 {
 	struct kvm_memory_slot *slot;
 
 	slot = gfn_to_memslot(kvm, gfn);
+
 	if (!slot || slot->flags & KVM_MEMSLOT_INVALID)
 		return bad_hva();
+
+	if (entry)
+		*entry = slot->npages - (gfn - slot->base_gfn);
+
 	return gfn_to_hva_memslot(slot, gfn);
 }
+
+unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn)
+{
+	return gfn_to_hva_many(kvm, gfn, NULL);
+}
 EXPORT_SYMBOL_GPL(gfn_to_hva);
 
 static pfn_t hva_to_pfn(struct kvm *kvm, unsigned long addr, bool atomic)
@@ -1011,6 +1021,23 @@ pfn_t gfn_to_pfn_memslot(struct kvm *kvm,
 	return hva_to_pfn(kvm, addr, false);
 }
 
+int gfn_to_page_many_atomic(struct kvm *kvm, gfn_t gfn,
+			    struct page **pages, int nr_pages, bool *enough)
+{
+	unsigned long addr;
+	int entry, ret;
+
+	addr = gfn_to_hva_many(kvm, gfn, &entry);
+	if (kvm_is_error_hva(addr))
+		return -1;
+
+	entry = min(entry, nr_pages);
+	*enough = (entry == nr_pages) ? true : false;
+	ret = __get_user_pages_fast(addr, entry, 1, pages);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(gfn_to_page_many_atomic);
+
 struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
 {
 	pfn_t pfn;