From patchwork Thu Feb 24 12:48:03 2022
X-Patchwork-Submitter: David Woodhouse
X-Patchwork-Id: 12758490
From: David Woodhouse
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, Joao Martins, Boris Ostrovsky, Metin Kaya, Paul Durrant
Subject: [PATCH v1 00/16] KVM: Add Xen event channel acceleration
Date: Thu, 24 Feb 2022 12:48:03 +0000
Message-Id: <20220224124819.3315-1-dwmw2@infradead.org>
X-Mailer: git-send-email 2.33.1

This series adds event channel acceleration for Xen guests. In particular
it allows guest vCPUs to send each other IPIs without having to bounce
all the way out to the userspace VMM in order to do so. Likewise, the Xen
singleshot timer is added, along with a version of SCHEDOP_poll. Those
major features are based on Joao and Boris' patches from 2019.

Cleaning up the event delivery into the vcpu_info involved using the new
gfn_to_pfn_cache for that, and that means I ended up doing so for *all*
the places the guest can have a pvclock.

There's a slight wart there, in that we now need to explicitly *clear*
the dirty flag in the cache in kvm_xen_destroy_vcpu() to prevent the page
being marked dirty from that context when there's no active vCPU;
otherwise it would trigger the warning I added in commit 2efd61a608.
That's actually OK for the Xen case, since the VMM will always know where
the regions are and it's reasonable to declare that they should be
considered 'always dirty'. I want to give that deferred dirty marking a
little more thought for the general case of the gfn_to_pfn_cache, though.

Changes since my 'v0' proof-of-concept series to invite early heckling:

 • Drop the runstate fix, which is merged now.
 • Add Sean's gfn_to_pfn_cache API change at the start of the series.
 • Add KVM self tests.
 • Minor bug fixes.

Boris Ostrovsky (1):
  KVM: x86/xen: handle PV spinlocks slowpath

David Woodhouse (11):
  KVM: x86/xen: Use gfn_to_pfn_cache for runstate area
  KVM: x86: Use gfn_to_pfn_cache for pv_time
  KVM: x86/xen: Use gfn_to_pfn_cache for vcpu_info
  KVM: x86/xen: Use gfn_to_pfn_cache for vcpu_time_info
  KVM: x86/xen: Make kvm_xen_set_evtchn() reusable from other places
  KVM: x86/xen: Support direct injection of event channel events
  KVM: x86/xen: Add KVM_XEN_VCPU_ATTR_TYPE_VCPU_ID
  KVM: x86/xen: Kernel acceleration for XENVER_version
  KVM: x86/xen: Support per-vCPU event channel upcall via local APIC
  KVM: x86/xen: Advertise and document KVM_XEN_HVM_CONFIG_EVTCHN_SEND
  KVM: x86/xen: Add self tests for KVM_XEN_HVM_CONFIG_EVTCHN_SEND

Joao Martins (3):
  KVM: x86/xen: intercept EVTCHNOP_send from guests
  KVM: x86/xen: handle PV IPI vcpu yield
  KVM: x86/xen: handle PV timers oneshot mode

Sean Christopherson (1):
  KVM: Use enum to track if cached PFN will be used in guest and/or host

 Documentation/virt/kvm/api.rst                     |  129 +-
 arch/x86/include/asm/kvm_host.h                    |   23 +-
 arch/x86/kvm/irq.c                                 |   11 +-
 arch/x86/kvm/irq_comm.c                            |    2 +-
 arch/x86/kvm/x86.c                                 |  123 +-
 arch/x86/kvm/xen.c                                 | 1257 ++++++++++++++++----
 arch/x86/kvm/xen.h                                 |   69 +-
 include/linux/kvm_host.h                           |   14 +-
 include/linux/kvm_types.h                          |   10 +-
 include/uapi/linux/kvm.h                           |   43 +
 .../testing/selftests/kvm/x86_64/xen_shinfo_test.c |  340 +++++-
 virt/kvm/pfncache.c                                |   14 +-
 12 files changed, 1700 insertions(+), 335 deletions(-)
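For anyone who wants to poke at this from userspace before reading the
Documentation patch, here is a rough sketch of how a VMM might check for
and use the new direct event channel injection. It is illustrative only:
VM creation, memory setup, the shared_info/vcpu_info wiring and error
handling are all elided, and the authoritative uAPI (struct layouts and
flag semantics) is what the patches themselves and xen_shinfo_test.c
define, so treat this as a sketch rather than a reference.

```c
/*
 * Illustrative sketch only: minimal userspace flow for the new
 * EVTCHN_SEND support. VM/vCPU creation and shared_info/vcpu_info
 * setup are elided; see Documentation/virt/kvm/api.rst and
 * xen_shinfo_test.c in this series for real usage.
 */
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int xen_evtchn_send_demo(int vm_fd)
{
	/* Only proceed if the kernel advertises the new capability bit. */
	int caps = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_XEN_HVM);
	if (!(caps & KVM_XEN_HVM_CONFIG_EVTCHN_SEND))
		return -1;

	/* Enable hypercall interception plus the new EVTCHN_SEND handling. */
	struct kvm_xen_hvm_config cfg = {
		.flags = KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL |
			 KVM_XEN_HVM_CONFIG_EVTCHN_SEND,
		.msr = 0x40000000,	/* hypercall page MSR; VMM's choice */
	};
	if (ioctl(vm_fd, KVM_XEN_HVM_CONFIG, &cfg))
		return -1;

	/* Set event channel port 3 pending on Xen vCPU 0 from userspace. */
	struct kvm_irq_routing_xen_evtchn evt = {
		.port		= 3,
		.vcpu		= 0,
		.priority	= KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL,
	};
	return ioctl(vm_fd, KVM_XEN_HVM_EVTCHN_SEND, &evt);
}
```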
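Similarly, because the in-kernel event channel, IPI and timer handling
needs to know which Xen vCPU ID each KVM vCPU represents (the two need
not match), the VMM is expected to set the new per-vCPU attribute up
front. Again only a sketch, under the same caveats as above:

```c
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Tell KVM which Xen vCPU ID this KVM vCPU represents, so that the
 * in-kernel EVTCHNOP_send / IPI / timer paths can find the right
 * vcpu_info without bouncing out to the VMM.
 */
static int xen_set_vcpu_id(int vcpu_fd, __u32 xen_vcpu_id)
{
	struct kvm_xen_vcpu_attr va = {
		.type = KVM_XEN_VCPU_ATTR_TYPE_VCPU_ID,
		.u.vcpu_id = xen_vcpu_id,
	};
	return ioctl(vcpu_fd, KVM_XEN_VCPU_SET_ATTR, &va);
}
```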