From patchwork Tue Sep 20 13:30:53 2016
X-Patchwork-Submitter: "Xuquan (Euler)" <xuquan8@huawei.com>
X-Patchwork-Id: 9341681
From: "Xuquan (Euler)" <xuquan8@huawei.com>
To: xen-devel@lists.xen.org
Date: Tue, 20 Sep 2016 13:30:53 +0000
Cc: yang.zhang.wz@gmail.com, Kevin Tian, jbeulich@suse.com,
 George.Dunlap@eu.citrix.com, Andrew Cooper, "Hanweidong (Randy)",
 Jiangyifei, "Nakajima, Jun"
Subject: [Xen-devel] [PATCH v2] x86/apicv: fix RTC periodic timer and apicv issue

From 97760602b5c94745e76ed78d23e8fdf9988d234e Mon Sep 17 00:00:00 2001
From: Quan Xu
Date: Tue, 20 Sep 2016 21:12:54 +0800
Subject: [PATCH v2] x86/apicv: fix RTC periodic timer and apicv issue

When Xen apicv is enabled, wall clock time runs fast in a Windows 7
32-bit guest under high load (with 2 vCPUs; xentrace captures show that
under high load the number of IPIs between these vCPUs increases
rapidly).

If an IPI (vector 0xe1) and the periodic timer interrupt (vector 0xd1)
are both pending (both bits set in vIRR), the IPI has higher priority
than the periodic timer interrupt. Xen propagates the IPI's vIRR bit to
the guest interrupt status (RVI) as the highest-priority pending
interrupt, and apicv (Virtual-Interrupt Delivery) delivers the IPI
within VMX non-root operation without a VM-exit. Still within VMX
non-root operation, once the periodic timer interrupt's bit becomes the
highest one set in vIRR, apicv delivers it without a VM-exit as well.
But in the current code, since Xen did not itself copy the periodic
timer interrupt's vIRR bit into the guest interrupt status (RVI), it is
unaware of this delivery, never decrements the count of pending
periodic timer interrupts (pending_intr_nr), and therefore delivers the
periodic timer interrupt again.

Furthermore, since the periodic timer interrupt is updated on every
VM-entry, an already-injected instance (one whose EOI-induced exit has
not happened yet) can cause the IRR bit to be set again if a VM-exit
occurs between virtual interrupt injection (vIRR->0, vISR->1) and the
EOI-induced exit (vISR->0), because pt_intr_post has not been invoked
yet; the guest then receives extra periodic timer interrupts.

So move pt_intr_post into the EOI-induced exit handler, and skip a
periodic timer that has not yet been completely consumed (irq_issued is
true).

Signed-off-by: Yifei Jiang
Signed-off-by: Rongguang He
Signed-off-by: Quan Xu
---
v2:
 - move pt_intr_post into the EOI-induced exit handler.
 - skip a periodic timer that has not been completely consumed
   (irq_issued is true).
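A note on why the IPI wins the race described above: the LAPIC derives
an interrupt's priority class from the upper four bits of its vector,
so vector 0xe1 outranks vector 0xd1. A minimal standalone sketch
(the helper name is illustrative, not taken from the Xen sources):

#include <stdio.h>
#include <stdint.h>

/* The x86 LAPIC groups vectors into 16 priority classes:
 * class = vector >> 4, and a higher class means higher priority.
 * Illustrative helper, not Xen code. */
static unsigned int apic_priority_class(uint8_t vector)
{
    return vector >> 4;
}

int main(void)
{
    uint8_t ipi = 0xe1;  /* IPI vector from the xentrace capture */
    uint8_t pt  = 0xd1;  /* RTC periodic timer vector */

    /* class 0xe > class 0xd: with both bits set in vIRR, the IPI is
     * delivered first and is the value Xen writes to RVI. */
    printf("IPI class %u, timer class %u\n",
           apic_priority_class(ipi), apic_priority_class(pt));
    return 0;
}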
---
 xen/arch/x86/hvm/vlapic.c   | 6 ++++++
 xen/arch/x86/hvm/vmx/intr.c | 2 --
 xen/arch/x86/hvm/vpt.c      | 3 ++-
 3 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 1d5d287..f83d6ab 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -433,6 +433,12 @@ void vlapic_EOI_set(struct vlapic *vlapic)
 void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
 {
     struct domain *d = vlapic_domain(vlapic);
+    struct vcpu *v = vlapic_vcpu(vlapic);
+    struct hvm_intack pt_intack;
+
+    pt_intack.vector = vector;
+    pt_intack.source = hvm_intsrc_lapic;
+    pt_intr_post(v, pt_intack);

     if ( vlapic_test_and_clear_vector(vector, &vlapic->regs->data[APIC_TMR]) )
         vioapic_update_EOI(d, vector);
diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index 8fca08c..29d9bbf 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -333,8 +333,6 @@ void vmx_intr_assist(void)
             clear_bit(i, &v->arch.hvm_vmx.eoi_exitmap_changed);
             __vmwrite(EOI_EXIT_BITMAP(i), v->arch.hvm_vmx.eoi_exit_bitmap[i]);
         }
-
-        pt_intr_post(v, intack);
     }
     else
     {
diff --git a/xen/arch/x86/hvm/vpt.c b/xen/arch/x86/hvm/vpt.c
index 5c48fdb..a9da436 100644
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -252,7 +252,8 @@ int pt_update_irq(struct vcpu *v)
         }
         else
         {
-            if ( (pt->last_plt_gtime + pt->period) < max_lag )
+            if ( (pt->last_plt_gtime + pt->period) < max_lag &&
+                 !pt->irq_issued )
             {
                 max_lag = pt->last_plt_gtime + pt->period;
                 earliest_pt = pt;
--
1.8.3.4
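For reviewers unfamiliar with the vpt code, here is a simplified model
of the lifecycle this patch relies on. It is a standalone sketch with
invented names (pt_model, pt_tick, and friends); the real struct
periodic_time, its locking, and missed-tick accounting are omitted:

#include <stdbool.h>
#include <stdio.h>

/* Toy model of one periodic timer source; illustration only. */
struct pt_model {
    unsigned int pending_intr_nr; /* ticks not yet seen by the guest */
    bool irq_issued;              /* injected but not yet EOI'd      */
};

/* Timer tick: one more interrupt is owed to the guest. */
static void pt_tick(struct pt_model *pt)
{
    pt->pending_intr_nr++;
}

/* VM-entry path (pt_update_irq in Xen): with this patch, a timer whose
 * previous instance is still in flight (irq_issued) is skipped, so a
 * VM-exit between injection and the EOI-induced exit cannot set the
 * IRR bit a second time. */
static bool pt_model_update_irq(struct pt_model *pt)
{
    if ( pt->pending_intr_nr == 0 || pt->irq_issued )
        return false;          /* nothing pending, or still in flight */
    pt->irq_issued = true;     /* vIRR bit set for the guest */
    return true;
}

/* EOI-induced exit path (vlapic_handle_EOI -> pt_intr_post after this
 * patch): the instance is now fully consumed, so account for it here
 * rather than on VM-entry. */
static void pt_model_intr_post(struct pt_model *pt)
{
    pt->pending_intr_nr--;
    pt->irq_issued = false;
}

int main(void)
{
    struct pt_model pt = { 0, false };

    pt_tick(&pt);
    printf("inject: %d\n", pt_model_update_irq(&pt)); /* 1: injected */
    printf("inject: %d\n", pt_model_update_irq(&pt)); /* 0: skipped  */
    pt_model_intr_post(&pt);                          /* guest EOIs  */
    printf("pending: %u\n", pt.pending_intr_nr);      /* back to 0   */
    return 0;
}

The second pt_model_update_irq call is the case the vpt.c hunk fixes:
before this patch the not-yet-EOI'd instance could be counted and
injected again, which is how the guest accumulated extra ticks.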