From patchwork Fri Oct 13 23:14:35 2017
X-Patchwork-Submitter: "Zhang, Yi"
X-Patchwork-Id: 10004877
From: Zhang Yi
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: pbonzini@redhat.com, rkrcmar@redhat.com, Zhang Yi Z
Subject: [PATCH RFC 08/10] KVM: VMX: Update the EPT leaf entry indicated with the SPP enable bit.
Date: Sat, 14 Oct 2017 07:14:35 +0800

From: Zhang Yi Z

If the sub-page write permission VM-execution control is set, the
treatment of write accesses to guest-physical addresses depends on the
state of the accumulated write-access bit (position 1) and the sub-page
permission bit (position 61) in the EPT leaf paging-structure entry.

Software updates the sub-page permission bit of the EPT leaf entry in
kvm_set_subpage(). If the write-access bit is 0 and the SPP bit is 1 in
a leaf EPT paging-structure entry that maps a 4KB page, the hardware
looks up a VMM-managed Sub-Page Permission Table (SPPT), which is also
prepared by kvm_set_subpage().
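For reference, here is a minimal sketch of the bit manipulation the
commit message describes. It is illustrative only, not part of the
patch: the mask values are assumptions drawn from the bit positions
named above (write access: bit 1, sub-page permission: bit 61), and
the real PT_SPP_MASK is introduced elsewhere in this series.

#include <linux/types.h>

/*
 * Assumed values, mirroring the commit message; the series defines
 * the real masks in arch/x86 headers.
 */
#define PT_WRITABLE_MASK	(1ULL << 1)
#define PT_SPP_MASK		(1ULL << 61)

/* Enable SPP for a 4K leaf: clear the write bit, set the SPP bit. */
static inline u64 spte_enable_spp(u64 spte)
{
	return (spte & ~PT_WRITABLE_MASK) | PT_SPP_MASK;
}

With both bits in this state, a write to the page triggers the SPPT
lookup instead of an unconditional EPT violation.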
Signed-off-by: Zhang Yi Z
---
 arch/x86/kvm/mmu.c | 100 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 100 insertions(+)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 6c92d19..0bda9eb 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1580,6 +1580,87 @@ int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+static bool __rmap_open_subpage_bit(struct kvm *kvm,
+				    struct kvm_rmap_head *rmap_head)
+{
+	struct rmap_iterator iter;
+	bool flush = false;
+	u64 *sptep;
+	u64 spte;
+
+	for_each_rmap_spte(rmap_head, &iter, sptep) {
+		/*
+		 * SPP takes effect only when the page is
+		 * write-protected and the SPP bit is set.
+		 */
+		flush |= spte_write_protect(sptep, false);
+		spte = *sptep | PT_SPP_MASK;
+		flush |= mmu_spte_update(sptep, spte);
+	}
+
+	return flush;
+}
+
+static int kvm_mmu_open_subpage_write_protect(struct kvm *kvm,
+					      struct kvm_memory_slot *slot,
+					      gfn_t gfn)
+{
+	struct kvm_rmap_head *rmap_head;
+	bool flush = false;
+
+	/*
+	 * We only support SPP on normal 4K, level-1 page frames.
+	 * If it is a huge page, we drop it.
+	 */
+	rmap_head = __gfn_to_rmap(gfn, PT_PAGE_TABLE_LEVEL, slot);
+
+	if (!rmap_head->val)
+		return -EFAULT;
+
+	flush |= __rmap_open_subpage_bit(kvm, rmap_head);
+
+	if (flush)
+		kvm_flush_remote_tlbs(kvm);
+
+	return 0;
+}
+
+static bool __rmap_clear_subpage_bit(struct kvm *kvm,
+				     struct kvm_rmap_head *rmap_head)
+{
+	struct rmap_iterator iter;
+	bool flush = false;
+	u64 *sptep;
+	u64 spte;
+
+	for_each_rmap_spte(rmap_head, &iter, sptep) {
+		spte = (*sptep & ~PT_SPP_MASK) | PT_WRITABLE_MASK;
+		flush |= mmu_spte_update(sptep, spte);
+	}
+
+	return flush;
+}
+
+static int kvm_mmu_clear_subpage_write_protect(struct kvm *kvm,
+					       struct kvm_memory_slot *slot,
+					       gfn_t gfn)
+{
+	struct kvm_rmap_head *rmap_head;
+	bool flush = false;
+
+	rmap_head = __gfn_to_rmap(gfn, PT_PAGE_TABLE_LEVEL, slot);
+
+	if (!rmap_head->val)
+		return -EFAULT;
+
+	flush |= __rmap_clear_subpage_bit(kvm, rmap_head);
+
+	if (flush)
+		kvm_flush_remote_tlbs(kvm);
+
+	return 0;
+}
+
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 				    struct kvm_memory_slot *slot, u64 gfn)
 {
@@ -4005,12 +4086,31 @@ int kvm_mmu_set_subpages(struct kvm *kvm, struct kvm_subpage *spp_info)
 	int npages = spp_info->npages;
 	struct kvm_memory_slot *slot;
 	u32 *wp_map;
+	int ret;
 	int i;
 
 	for (i = 0; i < npages; i++, gfn++) {
 		slot = gfn_to_memslot(kvm, gfn);
 		if (!slot)
 			return -EFAULT;
+
+		/*
+		 * Set the SPP bit in the EPT leaf entry to write-protect
+		 * the sub-pages of the corresponding page.
+		 */
+		if (access != (u32)((1ULL << 32) - 1))
+			ret = kvm_mmu_open_subpage_write_protect(
+				kvm, slot, gfn);
+		else
+			ret = kvm_mmu_clear_subpage_write_protect(
+				kvm, slot, gfn);
+
+		if (ret) {
+			pr_info("SPP: didn't get gfn %llx from EPT leaf level 1.\n"
+				"Huge pages are not currently supported by SPP;\n"
+				"please try disabling huge pages.\n", gfn);
+			return -EFAULT;
+		}
 		wp_map = gfn_to_subpage_wp_info(slot, gfn);
 		*wp_map = access;
 	}
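For context, a hypothetical caller sketch of kvm_mmu_set_subpages().
The field names (base_gfn, npages, access_map) are assumptions based
on the struct kvm_subpage introduced earlier in this series, not part
of this patch. Each access word is a write-permission bitmap over the
32 sub-pages of 128 bytes that make up one 4K page; an all-ones word
(0xffffffff, the sentinel tested above) leaves the page fully writable,
which is why that case clears the SPP bit instead of setting it.

	/*
	 * Hypothetical usage, assuming the struct kvm_subpage layout
	 * from earlier in this series: write-protect sub-page 1
	 * (bytes 128..255) of one 4K guest page, leave the other 31
	 * sub-pages writable.
	 */
	struct kvm_subpage spp_info = {
		.base_gfn   = gfn,
		.npages     = 1,
		.access_map = { 0xfffffffd },	/* bit 1 cleared */
	};

	ret = kvm_mmu_set_subpages(kvm, &spp_info);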