From patchwork Thu Jan 27 08:43:10 2011
X-Patchwork-Submitter: Nadav Har'El
X-Patchwork-Id: 510781
Date: Thu, 27 Jan 2011 10:43:10 +0200
Message-Id: <201101270843.p0R8hA6K002750@rice.haifa.ibm.com>
From: "Nadav Har'El"
To: kvm@vger.kernel.org
Cc: gleb@redhat.com, avi@redhat.com
Subject: [PATCH 26/29] nVMX: Additional TSC-offset handling
References: <1296116987-nyh@il.ibm.com>
X-Mailing-List: kvm@vger.kernel.org

---
--- .before/arch/x86/kvm/vmx.c	2011-01-26 18:06:06.000000000 +0200
+++ .after/arch/x86/kvm/vmx.c	2011-01-26 18:06:06.000000000 +0200
@@ -1655,12 +1655,23 @@ static u64 guest_read_tsc(void)
 static void vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 {
 	vmcs_write64(TSC_OFFSET, offset);
+	if (is_guest_mode(vcpu))
+		/*
+		 * We are only changing TSC_OFFSET when L2 is running if for
+		 * some reason L1 chose not to trap the TSC MSR. Since
+		 * prepare_vmcs12() does not copy tsc_offset, we need to also
+		 * set the vmcs12 field here.
+		 */
+		get_vmcs12_fields(vcpu)->tsc_offset = offset -
+			to_vmx(vcpu)->nested.vmcs01_fields->tsc_offset;
 }
 
 static void vmx_adjust_tsc_offset(struct kvm_vcpu *vcpu, s64 adjustment)
 {
 	u64 offset = vmcs_read64(TSC_OFFSET);
 	vmcs_write64(TSC_OFFSET, offset + adjustment);
+	if (is_guest_mode(vcpu))
+		get_vmcs12_fields(vcpu)->tsc_offset += adjustment;
 }
 
 static bool guest_cpuid_has_vmx(struct kvm_vcpu *vcpu)