From patchwork Wed Dec 23 11:25:48 2015
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 7910841
From: Xiao Guangrong
To: pbonzini@redhat.com
Cc: gleb@kernel.org, mtosatti@redhat.com, kvm@vger.kernel.org,
 linux-kernel@vger.kernel.org, kai.huang@linux.intel.com,
 jike.song@intel.com, Xiao Guangrong
Subject: [PATCH v2 05/11] KVM: page track: introduce kvm_page_track_{add,remove}_page
Date: Wed, 23 Dec 2015 19:25:48 +0800
Message-Id: <1450869954-30273-6-git-send-email-guangrong.xiao@linux.intel.com>
In-Reply-To: <1450869954-30273-1-git-send-email-guangrong.xiao@linux.intel.com>
References: <1450869954-30273-1-git-send-email-guangrong.xiao@linux.intel.com>

These two functions are the user APIs:
- kvm_page_track_add_page(): add the page to the tracking pool; afterwards,
  the specified access on that page will be tracked
- kvm_page_track_remove_page(): remove the page from the tracking pool; the
  specified access on the page is no longer tracked after the last user is
  gone

Both of these are called under the protection of kvm->srcu or
kvm->slots_lock.

Signed-off-by: Xiao Guangrong
---
 arch/x86/include/asm/kvm_page_track.h |  13 ++++
 arch/x86/kvm/page_track.c             | 124 ++++++++++++++++++++++++++++++++++
 2 files changed, 137 insertions(+)

diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h
index 55200406..c010124 100644
--- a/arch/x86/include/asm/kvm_page_track.h
+++ b/arch/x86/include/asm/kvm_page_track.h
@@ -10,4 +10,17 @@ void kvm_page_track_free_memslot(struct kvm_memory_slot *free,
 				 struct kvm_memory_slot *dont);
 int kvm_page_track_create_memslot(struct kvm_memory_slot *slot,
 				  unsigned long npages);
+
+void kvm_slot_page_track_add_page_nolock(struct kvm *kvm,
+					 struct kvm_memory_slot *slot,
+					 gfn_t gfn,
+					 enum kvm_page_track_mode mode);
+void kvm_page_track_add_page(struct kvm *kvm, gfn_t gfn,
+			     enum kvm_page_track_mode mode);
+void kvm_slot_page_track_remove_page_nolock(struct kvm *kvm,
+					    struct kvm_memory_slot *slot,
+					    gfn_t gfn,
+					    enum kvm_page_track_mode mode);
+void kvm_page_track_remove_page(struct kvm *kvm, gfn_t gfn,
+				enum kvm_page_track_mode mode);
 #endif
diff --git a/arch/x86/kvm/page_track.c b/arch/x86/kvm/page_track.c
index 8c396d0..e17efe9 100644
--- a/arch/x86/kvm/page_track.c
+++ b/arch/x86/kvm/page_track.c
@@ -50,3 +50,127 @@ track_free:
 	kvm_page_track_free_memslot(slot, NULL);
 	return -ENOMEM;
 }
+
+static bool check_mode(enum kvm_page_track_mode mode)
+{
+	if (mode < 0 || mode >= KVM_PAGE_TRACK_MAX)
+		return false;
+
+	return true;
+}
+
+static void update_gfn_track(struct kvm_memory_slot *slot, gfn_t gfn,
+			     enum kvm_page_track_mode mode, short count)
+{
+	int index, val;
+
+	index = gfn_to_index(gfn, slot->base_gfn, PT_PAGE_TABLE_LEVEL);
+
+	val = slot->arch.gfn_track[mode][index];
+
+	/*
+	 * does the tracking count wrap, or has the last tracker
+	 * already gone?
+	 */
+	WARN_ON(val + count < 0 || val + count > USHRT_MAX);
+
+	slot->arch.gfn_track[mode][index] += count;
+}
+
+void kvm_slot_page_track_add_page_nolock(struct kvm *kvm,
+					 struct kvm_memory_slot *slot,
+					 gfn_t gfn,
+					 enum kvm_page_track_mode mode)
+{
+	WARN_ON(!check_mode(mode));
+
+	update_gfn_track(slot, gfn, mode, 1);
+
+	/*
+	 * new track stops large page mapping for the
+	 * tracked page.
+	 */
+	kvm_mmu_gfn_disallow_lpage(slot, gfn);
+
+	if (mode == KVM_PAGE_TRACK_WRITE)
+		if (kvm_mmu_slot_gfn_write_protect(kvm, slot, gfn))
+			kvm_flush_remote_tlbs(kvm);
+}
+
+/*
+ * add guest page to the tracking pool so that corresponding access on that
+ * page will be intercepted.
+ *
+ * It should be called under the protection of kvm->srcu or kvm->slots_lock
+ *
+ * @kvm: the guest instance we are interested in.
+ * @gfn: the guest page.
+ * @mode: tracking mode, currently only write track is supported.
+ */
+void kvm_page_track_add_page(struct kvm *kvm, gfn_t gfn,
+			     enum kvm_page_track_mode mode)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *slot;
+	int i;
+
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		slots = __kvm_memslots(kvm, i);
+
+		slot = __gfn_to_memslot(slots, gfn);
+		if (!slot)
+			continue;
+
+		spin_lock(&kvm->mmu_lock);
+		kvm_slot_page_track_add_page_nolock(kvm, slot, gfn, mode);
+		spin_unlock(&kvm->mmu_lock);
+	}
+}
+
+void kvm_slot_page_track_remove_page_nolock(struct kvm *kvm,
+					    struct kvm_memory_slot *slot,
+					    gfn_t gfn,
+					    enum kvm_page_track_mode mode)
+{
+	WARN_ON(!check_mode(mode));
+
+	update_gfn_track(slot, gfn, mode, -1);
+
+	/*
+	 * allow large page mapping for the tracked page
+	 * after the tracker is gone.
+	 */
+	kvm_mmu_gfn_allow_lpage(slot, gfn);
+}
+
+/*
+ * remove the guest page from the tracking pool which stops the interception
+ * of corresponding access on that page. It is the opposite operation of
+ * kvm_page_track_add_page().
+ *
+ * It should be called under the protection of kvm->srcu or kvm->slots_lock
+ *
+ * @kvm: the guest instance we are interested in.
+ * @gfn: the guest page.
+ * @mode: tracking mode, currently only write track is supported.
+ */
+void kvm_page_track_remove_page(struct kvm *kvm, gfn_t gfn,
+				enum kvm_page_track_mode mode)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *slot;
+	int i;
+
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		slots = __kvm_memslots(kvm, i);
+
+		slot = __gfn_to_memslot(slots, gfn);
+		if (!slot)
+			continue;
+
+		spin_lock(&kvm->mmu_lock);
+		kvm_slot_page_track_remove_page_nolock(kvm, slot, gfn, mode);
+		spin_unlock(&kvm->mmu_lock);
+	}
+}