From patchwork Sat Jun 8 03:15:37 2013
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 2692181
Message-ID: <51B2A1D9.6060306@gmail.com>
Date: Sat, 08 Jun 2013 11:15:37 +0800
From: Xiao Guangrong
To: Gleb Natapov
CC: Paolo Bonzini, Marcelo Tosatti, LKML, KVM
Subject: [PATCH] KVM: x86: fix missed memory synchronization when patching the hypercall
X-Mailing-List: kvm@vger.kernel.org

From: Xiao Guangrong

Currently, memory synchronization is missing in emulator_fix_hypercall(); see commit 758ccc89b83 (KVM: x86: drop calling kvm_mmu_zap_all in emulator_fix_hypercall).

This patch fixes it by introducing kvm_vcpus_hang_on_page_start() and kvm_vcpus_hang_on_page_end(), which unmap the patched page from the guest and rely on kvm_flush_remote_tlbs() as the serializing instruction that ensures memory coherence.
[ The SDM states that INVEPT, INVVPID and MOV (to a control register, with the exception of MOV CR8) are serializing instructions. ]

The mmu-lock is held while the host patches the page, so any vcpu that faults on the page is blocked until the patching is finished.

Signed-off-by: Xiao Guangrong
---
 arch/x86/kvm/mmu.c | 25 +++++++++++++++++++++++++
 arch/x86/kvm/mmu.h |  3 +++
 arch/x86/kvm/x86.c |  7 +++++++
 3 files changed, 35 insertions(+)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7d50a2d..35cd0b6 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4536,6 +4536,31 @@ int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 sptes[4])
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_get_spte_hierarchy);
 
+/*
+ * Force vcpu to hang when it is trying to access the specified page.
+ *
+ * kvm_vcpus_hang_on_page_start and kvm_vcpus_hang_on_page_end should
+ * be used in pairs and they are currently used to sync memory access
+ * between vcpus when host cross-modifies the code segment of guest.
+ *
+ * We unmap the page from the guest and do memory synchronization by
+ * kvm_flush_remote_tlbs() under the protection of mmu-lock. If vcpu
+ * accesses the page, it will trigger #PF and be blocked on mmu-lock.
+ */
+void kvm_vcpus_hang_on_page_start(struct kvm *kvm, gfn_t gfn)
+{
+        spin_lock(&kvm->mmu_lock);
+
+        /* kvm_flush_remote_tlbs() can act as serializing instruction. */
+        if (kvm_unmap_hva(kvm, gfn_to_hva(kvm, gfn)))
+                kvm_flush_remote_tlbs(kvm);
+}
+
+void kvm_vcpus_hang_on_page_end(struct kvm *kvm)
+{
+        spin_unlock(&kvm->mmu_lock);
+}
+
 void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
 {
         ASSERT(vcpu);
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 5b59c57..35910be 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -115,4 +115,7 @@ static inline bool permission_fault(struct kvm_mmu *mmu, unsigned pte_access,
 }
 
 void kvm_mmu_invalidate_zap_all_pages(struct kvm *kvm);
+
+void kvm_vcpus_hang_on_page_start(struct kvm *kvm, gfn_t gfn);
+void kvm_vcpus_hang_on_page_end(struct kvm *kvm);
 #endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9e4afa7..776bf1a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5528,8 +5528,15 @@ static int emulator_fix_hypercall(struct x86_emulate_ctxt *ctxt)
         struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
         char instruction[3];
         unsigned long rip = kvm_rip_read(vcpu);
+        gpa_t gpa;
+
+        gpa = kvm_mmu_gva_to_gpa_fetch(vcpu, rip, NULL);
+        if (gpa == UNMAPPED_GVA)
+                return X86EMUL_PROPAGATE_FAULT;
 
+        kvm_vcpus_hang_on_page_start(vcpu->kvm, gpa_to_gfn(gpa));
         kvm_x86_ops->patch_hypercall(vcpu, instruction);
+        kvm_vcpus_hang_on_page_end(vcpu->kvm);
 
         return emulator_write_emulated(ctxt, rip, instruction, 3, NULL);
 }
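
For reviewers who want the pattern at a glance, the x86.c hunk reduces to the sketch below. It only restates the hunk above with explanatory comments; it is not an additional change to apply, and every identifier comes from the hunks in this patch or the existing KVM code they touch.

        /*
         * Sketch of the usage pattern introduced by this patch (a commented
         * restatement of the emulator_fix_hypercall() hunk, not new code).
         */
        gpa_t gpa = kvm_mmu_gva_to_gpa_fetch(vcpu, rip, NULL);
        if (gpa == UNMAPPED_GVA)
                return X86EMUL_PROPAGATE_FAULT;

        /*
         * Take mmu-lock, unmap the page and flush remote TLBs: from here on,
         * any vcpu that touches the page takes a page fault and blocks on
         * mmu-lock.
         */
        kvm_vcpus_hang_on_page_start(vcpu->kvm, gpa_to_gfn(gpa));

        /* Cross-modify the guest code while the other vcpus are fenced off. */
        kvm_x86_ops->patch_hypercall(vcpu, instruction);

        /* Drop mmu-lock: blocked vcpus re-fault and see the patched bytes. */
        kvm_vcpus_hang_on_page_end(vcpu->kvm);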