From patchwork Tue Sep 20 00:25:10 2016
X-Patchwork-Submitter: "Xuquan (Euler)" <xuquan8@huawei.com>
X-Patchwork-Id: 9340805
From: "Xuquan (Euler)" <xuquan8@huawei.com>
To: "Tian, Kevin" <kevin.tian@intel.com>, xen-devel@lists.xen.org
Cc: yang.zhang.wz@gmail.com, jbeulich@suse.com, George.Dunlap@eu.citrix.com,
 Andrew Cooper, "Nakajima, Jun"
Date: Tue, 20 Sep 2016 00:25:10 +0000
Subject: Re: [Xen-devel] [RFC PATCH] x86/apicv: fix RTC periodic timer and apicv issue

On September 19, 2016 5:25 PM, Tian, Kevin wrote:
>> From: Xuquan (Euler) [mailto:xuquan8@huawei.com]
>> Sent: Monday, September 12, 2016 5:08 PM
>>
>> On September 12, 2016 3:58 PM, Tian, Kevin wrote:
>> >> From: Xuquan (Euler) [mailto:xuquan8@huawei.com]
>> >> Sent: Friday, September 09, 2016 11:02 AM
>> >>
>> >> On August 30, 2016 1:58 PM, Tian, Kevin <kevin.tian@intel.com> wrote:
>> >> >> From: Xuquan (Euler) [mailto:xuquan8@huawei.com]
>> >> >> Sent: Friday, August 19, 2016 8:59 PM
>> >>
>> >> diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
>> >> index 1d5d287..cc247c3 100644
>> >> --- a/xen/arch/x86/hvm/vlapic.c
>> >> +++ b/xen/arch/x86/hvm/vlapic.c
>> >> @@ -433,6 +433,11 @@ void vlapic_EOI_set(struct vlapic *vlapic)
>> >>  void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
>> >>  {
>> >>      struct domain *d = vlapic_domain(vlapic);
>> >> +    struct hvm_intack pt_intack;
>> >> +
>> >> +    pt_intack.vector = vector;
>> >> +    pt_intack.source = hvm_intsrc_lapic;
>> >> +    pt_intr_post(vlapic_vcpu(vlapic), pt_intack);
>> >>
>> >>      if ( vlapic_test_and_clear_vector(vector, &vlapic->regs->data[APIC_TMR]) )
>> >>          vioapic_update_EOI(d, vector);
>> >> diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
>> >> index 8fca08c..29d9bbf 100644
>> >> --- a/xen/arch/x86/hvm/vmx/intr.c
>> >> +++ b/xen/arch/x86/hvm/vmx/intr.c
>> >> @@ -333,8 +333,6 @@ void vmx_intr_assist(void)
>> >>              clear_bit(i, &v->arch.hvm_vmx.eoi_exitmap_changed);
>> >>              __vmwrite(EOI_EXIT_BITMAP(i), v->arch.hvm_vmx.eoi_exit_bitmap[i]);
>> >>          }
>> >> -
>> >> -        pt_intr_post(v, intack);
>> >>      }
>> >>      else
>> >>      {
>> >>
>> >
>> >Because we update the pt irq on every vmentry, there is a chance that an
>> >already-injected instance (before the EOI-induced exit happens) will
>> >incur another pending IRR setting if a VM exit happens between HW
>> >virtual interrupt injection (vIRR->0, vISR->1) and the EOI-induced
>> >exit (vISR->0), since pt_intr_post hasn't been invoked yet. I guess
>> >this is the reason why you still see a faster wallclock.
>> >
>>
>> Agreed. A good description. My bad description is from another aspect.
>>
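To spell out the window you describe (an illustrative timeline only, using
your vIRR/vISR notation; this is not code from the tree):

/*
 * vmentry N      : pt_update_irq()     -> vIRR[vec] = 1
 * HW injection   : vIRR[vec] -> 0, vISR[vec] -> 1 (guest enters handler)
 * unrelated exit : happens before the guest writes EOI
 * vmentry N+1    : pt_update_irq() sets vIRR[vec] = 1 again, because
 *                  pt_intr_post() has not yet run for the first instance
 * EOI exit       : vISR[vec] -> 0, pt_intr_post() finally runs
 *
 * Net effect: two virtual interrupt deliveries for one pending pt tick,
 * which is why the guest wallclock runs fast.
 */
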
>> >I think you need to mark this pending_intr_post situation explicitly.
>> >Then pt_update_irq should skip such a pt timer when pending_intr_post
>> >of that timer is true (otherwise the update is meaningless, since the
>> >previous one hasn't been posted yet). Then, with your change to post
>> >in the EOI-induced exit handler, it should work correctly to meet the
>> >goal - one virtual interrupt delivery for one pending pt intr...
>> >
>> I think we are at least on the right track.
>> But I can't follow 'pending_intr_post'; a new parameter? Thanks.
>>
>
>Yes, a new parameter to record whether an intr_post operation is pending.

The existing parameter 'irq_issued' looks good. I tested with the
modification below last night, and it is working. If it is okay, I will
send out v2.

Quan

==== modification =====

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 1d5d287..cc247c3 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -433,6 +433,11 @@ void vlapic_EOI_set(struct vlapic *vlapic)
 void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
 {
     struct domain *d = vlapic_domain(vlapic);
+    struct hvm_intack pt_intack;
+
+    pt_intack.vector = vector;
+    pt_intack.source = hvm_intsrc_lapic;
+    pt_intr_post(vlapic_vcpu(vlapic), pt_intack);
 
     if ( vlapic_test_and_clear_vector(vector, &vlapic->regs->data[APIC_TMR]) )
         vioapic_update_EOI(d, vector);
diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index 8fca08c..29d9bbf 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -333,8 +333,6 @@ void vmx_intr_assist(void)
             clear_bit(i, &v->arch.hvm_vmx.eoi_exitmap_changed);
             __vmwrite(EOI_EXIT_BITMAP(i), v->arch.hvm_vmx.eoi_exit_bitmap[i]);
         }
-
-        pt_intr_post(v, intack);
     }
     else
     {
diff --git a/xen/arch/x86/hvm/vpt.c b/xen/arch/x86/hvm/vpt.c
index 5c48fdb..620ca68 100644
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -267,6 +267,11 @@ int pt_update_irq(struct vcpu *v)
         return -1;
     }
 
+    if ( earliest_pt->irq_issued )
+    {
+        spin_unlock(&v->arch.hvm_vcpu.tm_lock);
+        return -1;
+    }
     earliest_pt->irq_issued = 1;
     irq = earliest_pt->irq;
     is_lapic = (earliest_pt->source == PTSRC_lapic);
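
For completeness, here is a tiny user-space model of the two paths (a
hypothetical simulation only; it mirrors my reading of the patch, and it
assumes pt_intr_post() ends up clearing irq_issued via pt_irq_fired() in
vpt.c - nothing below is actual Xen code):

/* pt_model.c - standalone model of the race and of the irq_issued guard.
 * Build with: gcc -Wall -o pt_model pt_model.c */
#include <stdio.h>
#include <stdbool.h>

static bool irq_issued;  /* models earliest_pt->irq_issued */
static int injected;     /* virtual interrupts delivered to the guest */
static int posted;       /* ticks accounted by pt_intr_post() */

/* vmentry path, as in pt_update_irq() after this patch: refuse to queue
 * a second instance while the previous one has not been posted. */
static void pt_update_irq(void)
{
    if ( irq_issued )
        return;          /* the guard added in vpt.c */
    irq_issued = true;
    injected++;          /* models setting the vIRR bit for the timer vector */
}

/* EOI-induced exit path, as in vlapic_handle_EOI() after this patch. */
static void vlapic_handle_EOI(void)
{
    posted++;            /* models pt_intr_post() accounting the tick */
    irq_issued = false;  /* assumed: pt_irq_fired() clears the flag */
}

int main(void)
{
    pt_update_irq();      /* vmentry N: first injection */
    pt_update_irq();      /* vmentry N+1 before the guest EOIs: the race
                           * window - skipped thanks to the guard */
    vlapic_handle_EOI();  /* guest writes EOI */
    /* Without the guard this prints injected=2 posted=1 (fast wallclock);
     * with it, injected=1 posted=1. */
    printf("injected=%d posted=%d\n", injected, posted);
    return 0;
}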