From patchwork Fri Dec 16 09:40:03 2016
X-Patchwork-Submitter: "Xuquan (Euler)"
X-Patchwork-Id: 9477647
From: "Xuquan (Quan Xu)"
To: "xen-devel@lists.xen.org"
Date: Fri, 16 Dec 2016 09:40:03 +0000
Cc: "yang.zhang.wz@gmail.com", Lan Tianyu, "Tian, Kevin", "Nakajima, Jun",
 Andrew Cooper, George Dunlap, "Xuquan (Quan Xu)", Jan Beulich
Subject: [Xen-devel] [PATCH v3] x86/apicv: fix RTC periodic timer and apicv issue

From 89fffdd6b563b2723e24d17231715bb8c9f24f90 Mon Sep 17 00:00:00 2001
From: Quan Xu
Date: Fri, 16 Dec 2016 17:24:01 +0800
Subject: [PATCH v3] x86/apicv: fix RTC periodic timer and apicv issue

When Xen apicv is enabled, wall clock time runs fast on a Windows7-32
guest under high load (with 2 vCPUs; xentrace shows that under high load
the count of IPI interrupts between these vCPUs increases rapidly).

If an IPI interrupt (vector 0xe1) and a periodic timer interrupt (vector
0xd1) are both pending (their bits are set in vIRR), the IPI interrupt
unfortunately has higher priority than the periodic timer interrupt. Xen
propagates the IPI interrupt bit from vIRR into the guest interrupt
status (RVI) as the high-priority vector, and apicv (Virtual-Interrupt
Delivery) delivers the IPI interrupt within VMX non-root operation
without a VM exit. Still within VMX non-root operation, once the periodic
timer interrupt bit becomes the highest one set in vIRR, apicv delivers
the periodic timer interrupt as well. But in the current code, because
Xen did not itself write the periodic timer interrupt bit from vIRR into
the guest interrupt status (RVI), Xen is not aware of this delivery and
does not decrement the count (pending_intr_nr) of pending periodic timer
interrupts, so Xen will deliver the periodic timer interrupt again.
In addition, since we update the periodic timer interrupt on every VM
entry, there is a chance that an already-injected instance (before the
EOI-induced exit happens) incurs another pending vIRR setting if a VM
exit happens between the virtual interrupt injection (vIRR->0, vISR->1)
and the EOI-induced exit (vISR->0), since pt_intr_post() has not been
invoked yet; the guest then receives more periodic timer interrupts than
expected.

So we set eoi_exit_bitmap for intack.vector when it is higher than the
pending periodic timer interrupt. This way we can guarantee there is
always a chance to post periodic timer interrupts once they become the
highest-priority pending interrupt.

Signed-off-by: Quan Xu
---
 xen/arch/x86/hvm/vmx/intr.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)
--
1.7.12.4

diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index 639a705..d7a5716 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -315,9 +315,17 @@ void vmx_intr_assist(void)
          * Set eoi_exit_bitmap for periodic timer interrup to cause EOI-induced VM
          * exit, then pending periodic time interrups have the chance to be injected
          * for compensation
+         * Set eoi_exit_bitmap for intack.vector when it's higher than pending
+         * periodic time interrupts. This way we can guarantee there's always a chance
+         * to post periodic time interrupts when periodic time interrupts becomes the
+         * highest one
          */
-        if (pt_vector != -1)
-            vmx_set_eoi_exit_bitmap(v, pt_vector);
+        if ( pt_vector != -1 ) {
+            if ( intack.vector > pt_vector )
+                vmx_set_eoi_exit_bitmap(v, intack.vector);
+            else
+                vmx_set_eoi_exit_bitmap(v, pt_vector);
+        }
 
         /* we need update the RVI field */
         __vmread(GUEST_INTR_STATUS, &status);
@@ -334,7 +342,8 @@ void vmx_intr_assist(void)
                 __vmwrite(EOI_EXIT_BITMAP(i), v->arch.hvm_vmx.eoi_exit_bitmap[i]);
         }
 
-        pt_intr_post(v, intack);
+        if ( intack.vector == pt_vector )
+            pt_intr_post(v, intack);
     }
     else
     {