From patchwork Thu Apr 23 09:01:47 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Wanpeng Li
X-Patchwork-Id: 11505317
From: Wanpeng Li
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
    Jim Mattson, Joerg Roedel, Haiwei Li
Subject: [PATCH v2 5/5] KVM: VMX: Handle preemption timer fastpath
Date: Thu, 23 Apr 2020 17:01:47 +0800
Message-Id: <1587632507-18997-6-git-send-email-wanpengli@tencent.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1587632507-18997-1-git-send-email-wanpengli@tencent.com>
References: <1587632507-18997-1-git-send-email-wanpengli@tencent.com>
Sender: kvm-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

From: Wanpeng Li

This patch implements a preemption timer fastpath: when a VM-exit occurs
because the VMX-preemption timer has counted down to zero, handle it as soon
as possible and re-enter the guest (vmentry) immediately, without going
through the various other KVM exit-time checks, whenever that is safe.

Testing on an SKX server.

cyclictest in guest (w/o mwait exposed, adaptive advance lapic timer is
default -1):

5632.75ns -> 4559.25ns, 19%

kvm-unit-test/vmexit.flat:

w/o APICv, w/o advanced timer:
tscdeadline_immed: 4780.75 -> 3851     19.4%
tscdeadline:       7474    -> 6528.5   12.7%

w/o APICv, w/ adaptive advance timer default -1:
tscdeadline_immed: 4845.75 -> 3930.5   18.9%
tscdeadline:       6048    -> 5871.75   3%

w/ APICv, w/o advanced timer:
tscdeadline_immed: 2919    -> 2467.75  15.5%
tscdeadline:       5661.75 -> 5188.25   8.4%

w/ APICv, w/ adaptive advance timer default -1:
tscdeadline_immed: 3018.5  -> 2561     15.2%
tscdeadline:       4663.75 -> 4537      2.7%

Tested-by: Haiwei Li
Cc: Haiwei Li
Signed-off-by: Wanpeng Li
Reported-by: kbuild test robot
---
 arch/x86/kvm/lapic.c   | 19 +++++++++++++++++++
 arch/x86/kvm/lapic.h   |  1 +
 arch/x86/kvm/vmx/vmx.c | 22 ++++++++++++++++++++++
 3 files changed, 42 insertions(+)

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index d652bd9..2741931 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1899,6 +1899,25 @@ void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu)
 EXPORT_SYMBOL_GPL(kvm_lapic_expired_hv_timer);
 
 static void kvm_inject_apic_timer_irqs_fast(struct kvm_vcpu *vcpu);
+bool kvm_lapic_expired_hv_timer_fast(struct kvm_vcpu *vcpu)
+{
+        struct kvm_lapic *apic = vcpu->arch.apic;
+        struct kvm_timer *ktimer = &apic->lapic_timer;
+
+        if (!apic_lvtt_tscdeadline(apic) ||
+            !ktimer->hv_timer_in_use ||
+            atomic_read(&ktimer->pending))
+                return 0;
+
+        WARN_ON(swait_active(&vcpu->wq));
+        cancel_hv_timer(apic);
+
+        ktimer->expired_tscdeadline = ktimer->tscdeadline;
+        kvm_inject_apic_timer_irqs_fast(vcpu);
+
+        return 1;
+}
+EXPORT_SYMBOL_GPL(kvm_lapic_expired_hv_timer_fast);
 
 void kvm_lapic_switch_to_hv_timer(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index 5ef1364..1b5abd8 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -252,6 +252,7 @@ bool kvm_lapic_hv_timer_in_use(struct kvm_vcpu *vcpu);
 void kvm_lapic_restart_hv_timer(struct kvm_vcpu *vcpu);
 bool kvm_can_post_timer_interrupt(struct kvm_vcpu *vcpu);
 int kvm_set_lapic_tscdeadline_msr_fast(struct kvm_vcpu *vcpu, u64 data);
+bool kvm_lapic_expired_hv_timer_fast(struct kvm_vcpu *vcpu);
 
 static inline enum lapic_mode kvm_apic_mode(u64 apic_base)
 {
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 2613e58..527d1c1 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6569,12 +6569,34 @@ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)
         }
 }
 
+static void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu);
+
+static enum exit_fastpath_completion handle_fastpath_preemption_timer(struct kvm_vcpu *vcpu)
+{
+        struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+        if (kvm_need_cancel_enter_guest(vcpu) ||
+            kvm_event_needs_reinjection(vcpu))
+                return EXIT_FASTPATH_NONE;
+
+        if (!vmx->req_immediate_exit &&
+            !unlikely(vmx->loaded_vmcs->hv_timer_soft_disabled) &&
+            kvm_lapic_expired_hv_timer_fast(vcpu)) {
+                trace_kvm_exit(EXIT_REASON_PREEMPTION_TIMER, vcpu, KVM_ISA_VMX);
+                return EXIT_FASTPATH_CONT_RUN;
+        }
+
+        return EXIT_FASTPATH_NONE;
+}
+
 static enum exit_fastpath_completion vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 {
         if (!is_guest_mode(vcpu)) {
                 switch (to_vmx(vcpu)->exit_reason) {
                 case EXIT_REASON_MSR_WRITE:
                         return handle_fastpath_set_msr_irqoff(vcpu);
+                case EXIT_REASON_PREEMPTION_TIMER:
+                        return handle_fastpath_preemption_timer(vcpu);
                 default:
                         return EXIT_FASTPATH_NONE;
                 }
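
[Editor's illustration, not part of the patch] The EXIT_FASTPATH_CONT_RUN value
returned above is consumed by the arch run loop added earlier in this series
(vcpu_enter_guest() on x86); the stand-alone user-space C sketch below only
mirrors that "handled fast, re-enter the guest immediately" contract. All types
and helpers in it (fastpath_handle_exit, the exit numbers, the printf markers)
are simplified stand-ins, not kernel code.

/* Illustrative sketch only; every type and helper here is a stand-in. */
#include <stdio.h>

enum exit_fastpath_completion {
        EXIT_FASTPATH_NONE,     /* fall back to the full exit handlers */
        EXIT_FASTPATH_CONT_RUN, /* handled in the fastpath, re-enter guest */
};

/* Stand-in for vmx_exit_handlers_fastpath(): handle only "exit 0" fast. */
static enum exit_fastpath_completion fastpath_handle_exit(int exit_nr)
{
        return exit_nr == 0 ? EXIT_FASTPATH_CONT_RUN : EXIT_FASTPATH_NONE;
}

int main(void)
{
        for (int exit_nr = 0; exit_nr < 2; exit_nr++) {
                /* "vmentry": run the guest until the next VM-exit. */
                printf("vmentry, then VM-exit %d\n", exit_nr);

                if (fastpath_handle_exit(exit_nr) == EXIT_FASTPATH_CONT_RUN) {
                        /* Timer interrupt already injected in the fastpath;
                         * skip the slow path and go straight back in. */
                        printf("  fastpath hit: continue running\n");
                        continue;
                }

                /* Slow path: the usual full VM-exit processing. */
                printf("  slow path: full exit handling\n");
        }
        return 0;
}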