From patchwork Mon May 16 19:58:16 2011
X-Patchwork-Submitter: Nadav Har'El
X-Patchwork-Id: 789682
Date: Mon, 16 May 2011 22:58:16 +0300
Message-Id: <201105161958.p4GJwGNO002057@rice.haifa.ibm.com>
From: "Nadav Har'El"
To: kvm@vger.kernel.org
Cc: gleb@redhat.com, avi@redhat.com
References: <1305575004-nyh@il.ibm.com>
Subject: [PATCH 28/31] nVMX: Additional TSC-offset handling

In the unlikely case that L1 does not trap writes to MSR_IA32_TSC, L0 needs
to emulate such an MSR write by L2 by modifying vmcs02.tsc_offset. We also
need to set vmcs12.tsc_offset, so that this change survives the next nested
entry (see prepare_vmcs02()).

Additionally, we need to modify vmx_adjust_tsc_offset: the semantics of this
function are that the TSC of every guest on this vcpu, L1 and possibly
several L2s, needs to be adjusted. To do this, we need to adjust vmcs01's
tsc_offset (this offset will also apply to each L2 we enter). We can't write
vmcs01 now, because vmcs02 is the currently loaded VMCS while L2 runs, so we
have to remember this adjustment and apply it when we later exit to L1.
Signed-off-by: Nadav Har'El
---
 arch/x86/kvm/vmx.c |   12 ++++++++++++
 1 file changed, 12 insertions(+)

--- .before/arch/x86/kvm/vmx.c	2011-05-16 22:36:50.000000000 +0300
+++ .after/arch/x86/kvm/vmx.c	2011-05-16 22:36:50.000000000 +0300
@@ -1764,12 +1764,24 @@ static void vmx_set_tsc_khz(struct kvm_v
 static void vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
 {
 	vmcs_write64(TSC_OFFSET, offset);
+	if (is_guest_mode(vcpu))
+		/*
+		 * We're here if L1 chose not to trap the TSC MSR. Since
+		 * prepare_vmcs12() does not copy tsc_offset, we need to also
+		 * set the vmcs12 field here.
+		 */
+		get_vmcs12(vcpu)->tsc_offset = offset -
+			to_vmx(vcpu)->nested.vmcs01_tsc_offset;
 }
 
 static void vmx_adjust_tsc_offset(struct kvm_vcpu *vcpu, s64 adjustment)
 {
 	u64 offset = vmcs_read64(TSC_OFFSET);
 	vmcs_write64(TSC_OFFSET, offset + adjustment);
+	if (is_guest_mode(vcpu)) {
+		/* Even when running L2, the adjustment needs to apply to L1 */
+		to_vmx(vcpu)->nested.vmcs01_tsc_offset += adjustment;
+	}
 }
 
 static u64 vmx_compute_tsc_offset(struct kvm_vcpu *vcpu, u64 target_tsc)
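
(Not part of the patch itself: a sketch of why vmcs12->tsc_offset is stored
as "offset - vmcs01_tsc_offset". On the next nested entry, prepare_vmcs02()
recomputes vmcs02's TSC_OFFSET from these two values, which is what lets the
write above survive that entry. The exact code belongs to the prepare_vmcs02()
patch elsewhere in this series; names such as vmcs01_tsc_offset and
CPU_BASED_USE_TSC_OFFSETING are assumed from the rest of the series.)

	/* Sketch only, assuming the names used elsewhere in this series */
	if (vmcs12->cpu_based_vm_exec_control & CPU_BASED_USE_TSC_OFFSETING)
		/*
		 * vmcs01_tsc_offset + vmcs12->tsc_offset reproduces exactly
		 * the offset that vmx_write_tsc_offset() wrote to vmcs02.
		 */
		vmcs_write64(TSC_OFFSET, vmx->nested.vmcs01_tsc_offset +
			vmcs12->tsc_offset);
	else
		/* L1 applies no extra offset for L2; L2 sees L1's offset. */
		vmcs_write64(TSC_OFFSET, vmx->nested.vmcs01_tsc_offset);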