From patchwork Fri Sep 22 09:20:20 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Woodhouse
X-Patchwork-Id: 13395419
Message-ID: <445abd6377cf1621a2d306226cab427820583967.camel@infradead.org>
Subject: [PATCH] KVM: x86: Use fast path for Xen timer delivery
From: David Woodhouse
To: kvm
Cc: Paul Durrant, Sean Christopherson, Paolo Bonzini, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
 "H. Peter Anvin", linux-kernel@vger.kernel.org, "Griffoul, Fred"
Date: Fri, 22 Sep 2023 10:20:20 +0100
User-Agent: Evolution 3.44.4-0ubuntu2
List-ID:
X-Mailing-List: kvm@vger.kernel.org

From: David Woodhouse

Most of the time there's no need to kick the vCPU and deliver the timer
event through kvm_xen_inject_timer_irqs(). Use kvm_xen_set_evtchn_fast()
directly from the timer callback, and only fall back to the slow path
when it's necessary to do so.

This gives a significant improvement in timer latency testing (using
nanosleep() for various periods and then measuring the actual time
elapsed).

Signed-off-by: David Woodhouse
Reviewed-by: Paul Durrant
---
 arch/x86/kvm/xen.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 40edf4d1974c..66c4cf93a55c 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -134,12 +134,23 @@ static enum hrtimer_restart xen_timer_callback(struct hrtimer *timer)
 {
 	struct kvm_vcpu *vcpu = container_of(timer, struct kvm_vcpu,
 					     arch.xen.timer);
+	struct kvm_xen_evtchn e;
+	int rc;
+
 	if (atomic_read(&vcpu->arch.xen.timer_pending))
 		return HRTIMER_NORESTART;
 
-	atomic_inc(&vcpu->arch.xen.timer_pending);
-	kvm_make_request(KVM_REQ_UNBLOCK, vcpu);
-	kvm_vcpu_kick(vcpu);
+	e.vcpu_id = vcpu->vcpu_id;
+	e.vcpu_idx = vcpu->vcpu_idx;
+	e.port = vcpu->arch.xen.timer_virq;
+	e.priority = KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL;
+
+	rc = kvm_xen_set_evtchn_fast(&e, vcpu->kvm);
+	if (rc == -EWOULDBLOCK) {
+		atomic_inc(&vcpu->arch.xen.timer_pending);
+		kvm_make_request(KVM_REQ_UNBLOCK, vcpu);
+		kvm_vcpu_kick(vcpu);
+	}
 
 	return HRTIMER_NORESTART;
 }