From patchwork Wed Jan 4 12:21:20 2017
X-Patchwork-Submitter: "Xuquan (Euler)" <xuquan8@huawei.com>
X-Patchwork-Id: 9496585
From: "Xuquan (Quan Xu)" <xuquan8@huawei.com>
To: xen-devel@lists.xen.org
Cc: yang.zhang.wz@gmail.com, Lan Tianyu, Kevin Tian, Jan Beulich,
 Andrew Cooper, George Dunlap, Jun Nakajima, Chao Gao
Date: Wed, 4 Jan 2017 12:21:20 +0000
Subject: [Xen-devel] [PATCH v5] x86/apicv: fix RTC periodic timer and apicv issue

From 9c23e1ff3eb75d71d691778a2e83421f645902fb Mon Sep 17 00:00:00 2001
From: Quan Xu <xuquan8@huawei.com>
Date: Wed, 4 Jan 2017 20:03:31 +0800
Subject: [PATCH v5] x86/apicv: fix RTC periodic timer and apicv issue

When Xen apicv is enabled, wall clock time runs fast on a Windows7-32
guest under high load (with 2 vCPUs; xentrace captures show that under
high load the number of IPIs between these vCPUs increases rapidly).

If an IPI (vector 0xe1) and a periodic timer interrupt (vector 0xd1)
are both pending (their bits are set in vIRR), the IPI unfortunately
has higher priority than the periodic timer interrupt. Xen propagates
the IPI's vIRR bit to the guest interrupt status (RVI) as the
highest-priority pending vector, and apicv (Virtual-Interrupt Delivery)
delivers the IPI within VMX non-root operation without a VM-Exit. If
the periodic timer interrupt's bit is then the highest one set in vIRR,
apicv delivers the periodic timer interrupt within VMX non-root
operation as well. But in the current code, since Xen did not itself
copy the periodic timer interrupt's vIRR bit into the guest interrupt
status (RVI), Xen is not aware of this delivery and does not decrement
the count of pending periodic timer interrupts (pending_intr_nr), so
Xen delivers the periodic timer interrupt again.

Furthermore, since the periodic timer interrupt is updated on every
VM-entry, an already-injected instance can set its vIRR bit again if a
VM-exit happens between virtual interrupt injection (vIRR->0, vISR->1)
and the EOI-induced exit (vISR->0): pt_intr_post() has not been invoked
yet, so the guest receives extra periodic timer interrupts.

So we set the eoi_exit_bitmap for intack.vector, giving us a chance to
post periodic timer interrupts once the periodic timer interrupt
becomes the highest pending one.
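As background, here is a standalone sketch of the priority ordering at
work (an illustration only, not part of the patch): an x86 APIC groups
vectors into 16 priority classes by bits 7:4 of the vector number, so
with both bits set in vIRR the IPI's class (0xe) beats the periodic
timer's class (0xd).

/* Standalone sketch (not Xen code): why the IPI (vector 0xe1) is
 * delivered before the periodic timer interrupt (vector 0xd1) when
 * both bits are set in vIRR.
 */
#include <stdio.h>

static unsigned int priority_class(unsigned int vector)
{
    return (vector >> 4) & 0xf;   /* bits 7:4 select the class */
}

int main(void)
{
    unsigned int ipi = 0xe1, pt = 0xd1;  /* vectors from the trace above */

    printf("IPI class: %u, timer class: %u\n",
           priority_class(ipi), priority_class(pt));
    if ( priority_class(ipi) > priority_class(pt) )
        printf("IPI delivered first; timer stays pending in vIRR\n");
    return 0;
}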
Signed-off-by: Quan Xu <xuquan8@huawei.com>
Acked-by: Kevin Tian
Tested-by: Chao Gao
---
 xen/arch/x86/hvm/vmx/intr.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)
--
1.7.12.4

diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index 639a705..4d60eec 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -312,13 +312,14 @@ void vmx_intr_assist(void)
         unsigned int i, n;
 
         /*
-         * Set eoi_exit_bitmap for periodic timer interrup to cause EOI-induced VM
-         * exit, then pending periodic time interrups have the chance to be injected
-         * for compensation
+         * intack.vector is the highest priority vector. So we set eoi_exit_bitmap
+         * for intack.vector - give a chance to post periodic time interrupts when
+         * periodic time interrupts become the highest one
          */
-        if (pt_vector != -1)
-            vmx_set_eoi_exit_bitmap(v, pt_vector);
-
+        if ( pt_vector != -1 ) {
+            ASSERT(intack.vector >= pt_vector);
+            vmx_set_eoi_exit_bitmap(v, intack.vector);
+        }
         /* we need update the RVI field */
         __vmread(GUEST_INTR_STATUS, &status);
         status &= ~VMX_GUEST_INTR_STATUS_SUBFIELD_BITMASK;
@@ -334,7 +335,8 @@ void vmx_intr_assist(void)
                 __vmwrite(EOI_EXIT_BITMAP(i), v->arch.hvm_vmx.eoi_exit_bitmap[i]);
         }
 
-        pt_intr_post(v, intack);
+        if ( intack.vector == pt_vector )
+            pt_intr_post(v, intack);
     }
     else
     {
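The new "intack.vector == pt_vector" guard matters because of the
compensation accounting. Below is a much-simplified model of that
bookkeeping (the struct and helper names are illustrative only; Xen's
real logic lives in xen/arch/x86/hvm/vpt.c): the owed-tick counter may
only be decremented when the vector actually being injected is the
periodic timer's vector, otherwise the accounting no longer matches
what the guest observed and extra ticks get injected later.

/* Much-simplified model of the pending_intr_nr accounting; names and
 * shapes here are illustrative, not Xen's actual internals.
 */
struct pt_model {
    int pending_intr_nr;    /* timer ticks still owed to the guest */
    int pt_vector;          /* the periodic timer's vector, e.g. 0xd1 */
};

/* Each elapsed timer period adds one owed tick. */
static void pt_period_elapsed(struct pt_model *pt)
{
    pt->pending_intr_nr++;
}

/* Post-injection bookkeeping, in the spirit of pt_intr_post(): settle
 * one owed tick only when the injected vector really is the timer's,
 * which is exactly the condition the patch adds before pt_intr_post().
 */
static void pt_injected(struct pt_model *pt, int intack_vector)
{
    if ( intack_vector == pt->pt_vector && pt->pending_intr_nr > 0 )
        pt->pending_intr_nr--;
}

int main(void)
{
    struct pt_model pt = { .pending_intr_nr = 0, .pt_vector = 0xd1 };

    pt_period_elapsed(&pt);    /* one tick owed */
    pt_injected(&pt, 0xe1);    /* IPI injected: the owed tick must stay */
    pt_injected(&pt, 0xd1);    /* timer injected: the owed tick is settled */
    return pt.pending_intr_nr; /* 0 */
}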