From patchwork Fri Aug 19 12:58:53 2016
X-Patchwork-Submitter: "Xuquan (Euler)"
X-Patchwork-Id: 9290239
From: "Xuquan (Euler)"
To: "xen-devel@lists.xen.org"
Cc: "yang.zhang.wz@gmail.com", Kevin Tian, "jbeulich@suse.com",
 "George.Dunlap@eu.citrix.com", Andrew Cooper, "jun.nakajima@intel.com"
Date: Fri, 19 Aug 2016 12:58:53 +0000
Subject: [Xen-devel] [RFC PATCH] x86/apicv: fix RTC periodic timer and apicv issue

From 9b2df963c13ad27e2cffbeddfa3267782ac3da2a Mon Sep 17 00:00:00 2001
From: Quan Xu
Date: Fri, 19 Aug 2016 20:40:31 +0800
Subject: [RFC PATCH] x86/apicv: fix RTC periodic timer and apicv issue

When Xen apicv is enabled, wall clock time runs fast on a Windows7-32
guest under high load (2 vCPUs; xentrace captures show that under high
load the number of IPIs between these vCPUs increases rapidly).

If an IPI (vector 0xe1) and a periodic timer interrupt (vector 0xd1)
are both pending (their bits are set in VIRR), the IPI has a higher
priority than the periodic timer interrupt. Xen writes the IPI vector
from VIRR into the guest interrupt status (RVI) as the highest-priority
pending interrupt, and apicv (virtual-interrupt delivery) delivers the
IPI within VMX non-root operation without a VM exit. Still within VMX
non-root operation, once the periodic timer interrupt bit becomes the
highest one set in VIRR, apicv delivers the periodic timer interrupt as
well.

In the current code, however, when Xen does not itself write the
periodic timer vector into the guest interrupt status (RVI), it is not
aware that the interrupt has been delivered and does not decrease the
count (pending_intr_nr) of pending periodic timer interrupts, so it
injects the periodic timer interrupt again. The guest therefore
receives extra periodic timer interrupts.

Fix this: when the periodic timer interrupt has been delivered but was
not the highest-priority pending interrupt, make Xen aware of this case
so that it decreases the count of pending periodic timer interrupts.

Signed-off-by: Yifei Jiang
Signed-off-by: Rongguang He
Signed-off-by: Quan Xu
Reviewed-by: Yang Zhang
---
Why RFC:
1. I am not quite sure about other cases, such as the nested case.
2. Within VMX non-root operation, an asynchronous exit (an external
   interrupt, a non-maskable interrupt, a system-management interrupt,
   an exception, or a VM exit) may occur before a periodic timer
   interrupt is delivered; that periodic timer interrupt may then be
   lost when the next one is delivered. The current code already
   behaves this way.
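Not part of the patch, but to make the vector comparison concrete: on
x86, an interrupt's priority class is the upper four bits of its vector
(vector >> 4), which is why the 0xe1 IPI outranks the 0xd1 periodic
timer interrupt and why comparing raw vector numbers is a valid
priority test. The sketch below is a minimal, self-contained
illustration; apic_priority_class() is a hypothetical helper, not Xen
code.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper, for illustration only: x86 APIC priority is
 * determined by the upper four bits of the vector number. */
static unsigned int apic_priority_class(uint8_t vector)
{
    return vector >> 4;
}

int main(void)
{
    uint8_t ipi = 0xe1, timer = 0xd1;

    /* Prints "IPI class 14, timer class 13": the IPI is delivered first. */
    printf("IPI class %u, timer class %u\n",
           apic_priority_class(ipi), apic_priority_class(timer));
    return 0;
}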
---
 xen/arch/x86/hvm/vmx/intr.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

--
1.7.12.4

diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index 8fca08c..d3a034e 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -334,7 +334,21 @@ void vmx_intr_assist(void)
                     __vmwrite(EOI_EXIT_BITMAP(i), v->arch.hvm_vmx.eoi_exit_bitmap[i]);
         }
 
-        pt_intr_post(v, intack);
+        /*
+         * If the periodic timer interrupt was delivered but was not the
+         * highest-priority pending interrupt, make Xen aware of this case
+         * so it decreases the count of pending periodic timer interrupts.
+         */
+        if ( pt_vector != -1 && intack.vector > pt_vector )
+        {
+            struct hvm_intack pt_intack;
+
+            pt_intack.vector = pt_vector;
+            pt_intack.source = hvm_intsrc_lapic;
+            pt_intr_post(v, pt_intack);
+        }
+        else
+            pt_intr_post(v, intack);
     }
     else
     {
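For reference, a self-contained model of the decision the hunk above
makes, assuming (as in the patch) that a higher raw vector number means
a higher priority class. struct intack and intsrc_lapic are
hypothetical stand-ins mirroring Xen's struct hvm_intack and
hvm_intsrc_lapic; this is an illustration, not Xen source.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for Xen's hvm_intsrc_lapic / struct hvm_intack. */
enum intsrc { intsrc_none = 0, intsrc_lapic = 2 };

struct intack {
    uint8_t source;
    uint8_t vector;
};

/* Decide which acknowledgement to post to the periodic timer code: if a
 * higher-priority vector (e.g. the 0xe1 IPI) is being injected while the
 * timer vector is also pending, post for the timer vector instead, so its
 * pending count is decremented even though apicv performed the delivery. */
static struct intack choose_pt_post(struct intack intack, int pt_vector)
{
    if ( pt_vector != -1 && intack.vector > pt_vector )
    {
        struct intack pt_intack = {
            .source = intsrc_lapic,
            .vector = (uint8_t)pt_vector,
        };
        return pt_intack;
    }
    return intack;
}

int main(void)
{
    struct intack ipi = { .source = intsrc_lapic, .vector = 0xe1 };
    struct intack posted = choose_pt_post(ipi, 0xd1);

    printf("posted vector: %#x\n", (unsigned)posted.vector); /* 0xd1 */
    return 0;
}

Posting the acknowledgement for pt_vector rather than for the injected
vector is what lets pt_intr_post() decrement pending_intr_nr for the
timer even when the timer interrupt was not the one Xen wrote into RVI.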