From patchwork Mon Sep 18 09:06:24 2023
X-Patchwork-Submitter: David Woodhouse
X-Patchwork-Id: 13389254
Message-ID: <1b52b557beb6606007f7ec5672eab0adf1606a34.camel@infradead.org>
Subject: [RFC] KVM: x86: Allow userspace exit on HLT and MWAIT, else yield on MWAIT
From: David Woodhouse
To: kvm@vger.kernel.org, Peter Zijlstra
Cc: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin",
    linux-kernel@vger.kernel.org, graf@amazon.de, Nicolas Saenz Julienne,
    "Griffoul, Fred"
Date: Mon, 18 Sep 2023 11:06:24 +0200

From: David Woodhouse

The VMM may have work to do on behalf of the guest, and it's often
desirable to use the cycles when the vCPUs are idle.

When the vCPU uses HLT this works out OK, because the VMM can run its
tasks in a separate thread which gets scheduled when the in-kernel
emulation of HLT schedules away. It isn't perfect, because it doesn't
easily allow the VMM to distinguish low-priority maintenance tasks,
which it wants to defer until the vCPU is idle, from higher-priority
tasks for which it *does* want to preempt the vCPU. It can also lead
to noisy-neighbour effects, since a host isn't necessarily sized to
expect any given VMM to suddenly be contending for many *more* pCPUs
than it has vCPUs.

In addition, there are times when we need to expose MWAIT to a guest
for compatibility with a previous environment. And MWAIT is much
harder, because it's very difficult to emulate properly.
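For context, a guest's MWAIT-based idle loop looks roughly like the
sketch below. This is purely illustrative, not taken from any guest
kernel; cpu_monitor()/cpu_mwait() are just thin wrappers around the
raw instructions. If the hypervisor treats MWAIT as a NOP, the whole
thing degenerates into a busy-wait that burns the pCPU in guest mode.

/* Illustrative guest-side idle wait built on MONITOR/MWAIT. */
static inline void cpu_monitor(const void *addr, unsigned long ext,
			       unsigned long hints)
{
	/* MONITOR: address in RAX, extensions in RCX, hints in RDX. */
	asm volatile("monitor" :: "a" (addr), "c" (ext), "d" (hints));
}

static inline void cpu_mwait(unsigned long hints, unsigned long ext)
{
	/* MWAIT: hints in RAX, extensions in RCX. */
	asm volatile("mwait" :: "a" (hints), "c" (ext));
}

static void idle_wait_for(volatile unsigned long *flag)
{
	while (!*flag) {
		cpu_monitor((const void *)flag, 0, 0);
		/* Re-check after arming the monitor to close the race. */
		if (!*flag)
			cpu_mwait(0, 0);  /* if emulated as NOP: busy-wait */
	}
}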
There have been attempts at doing so, based on marking the target page
read-only in MONITOR and triggering the wakeup when it takes a minor
fault, but so far they haven't led to a working solution:
https://www.contrib.andrew.cmu.edu/~somlo/OSXKVM/mwait.html

So when a guest executes MWAIT, either we've disabled exit-on-mwait
and the guest actually sits in non-root mode hogging the pCPU, or, if
we do enable exit-on-mwait, the kernel just treats it as a NOP and
bounces right back into the guest to busy-wait round its idle loop.

For a start, we can stick a yield() into that busy loop. The yield()
call has fairly poorly defined semantics, but it's better than
*nothing* and does allow a VMM's thread-based I/O and maintenance
tasks to run a *little* better.

Better still, we can bounce all the way out to *userspace* on an MWAIT
exit, and let the VMM perform some of its pending work right there and
then in the vCPU thread before re-entering the vCPU. That's much nicer
than yield(). The vCPU is still runnable, since we still don't have a
*real* emulation of MWAIT, so the vCPU thread can do a *little* bit of
work and then go back into the vCPU for another turn around the loop.

And if we're going to do that kind of task processing for MWAIT-idle
guests directly from the vCPU thread, it's neater to do it for
HLT-idle guests that way too.

For HLT, the vCPU *isn't* runnable; it'll be in KVM_MP_STATE_HALTED.
The VMM can poll the mp_state and know when the vCPU should be run
again. But not with poll(), although we might want to hook up
something like that (or just a signal or eventfd) for other reasons,
for VSM, anyway. The VMM can also just do some work and then re-enter
the vCPU without the corresponding bit set in the kvm_run struct.

So, er, what does this patch do? It adds a capability, and defines two
bits for exiting to userspace on HLT or MWAIT. The bits live in the
kvm_run struct rather than needing a separate ioctl to turn them on or
off, so that the VMM can make the decision each time it enters the
vCPU. It hooks them up to (ab?)use the existing KVM_EXIT_HLT, which
was previously used only when the local APIC was emulated in
userspace, and adds a new KVM_EXIT_MWAIT.

Largely untested. If this approach seems reasonable, of course I'll
add test cases and proper documentation before posting it for real.
This is the proof of concept, before we even put it through testing to
see what performance we get out of it, especially for those obnoxious
MWAIT-enabled guests.

Signed-off-by: David Woodhouse
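To make the intended usage concrete, here's a rough sketch of a VMM
vCPU-thread loop against the proposed ABI. The has_pending_work(),
do_deferred_work() and handle_other_exit() helpers are hypothetical
stand-ins for the VMM's own task queue and exit dispatcher; 'run'
points at the vCPU's mmap()ed kvm_run structure.

#include <linux/kvm.h>
#include <sys/ioctl.h>

extern int has_pending_work(void);
extern void do_deferred_work(void);
extern void handle_other_exit(struct kvm_run *run);

static void vcpu_thread_loop(int vcpu_fd, struct kvm_run *run)
{
	for (;;) {
		/* Decide afresh on each entry whether we want the exits. */
		run->userspace_exits = has_pending_work() ?
			(KVM_X86_USERSPACE_EXIT_HLT |
			 KVM_X86_USERSPACE_EXIT_MWAIT) : 0;

		if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
			break;

		switch (run->exit_reason) {
		case KVM_EXIT_MWAIT:
			/*
			 * The vCPU is busy-waiting but still runnable: do
			 * a little work, then go straight back in.
			 */
			do_deferred_work();
			break;
		case KVM_EXIT_HLT:
			/*
			 * The vCPU is in KVM_MP_STATE_HALTED: do work
			 * until it becomes runnable again, then re-enter.
			 */
			do_deferred_work();
			break;
		default:
			handle_other_exit(run);
		}
	}
}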
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a6582c1fd8b9..8f931539114a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2128,9 +2128,23 @@ static int kvm_emulate_monitor_mwait(struct kvm_vcpu *vcpu, const char *insn)
 	pr_warn_once("%s instruction emulated as NOP!\n", insn);
 	return kvm_emulate_as_nop(vcpu);
 }
+
 int kvm_emulate_mwait(struct kvm_vcpu *vcpu)
 {
-	return kvm_emulate_monitor_mwait(vcpu, "MWAIT");
+	int ret = kvm_emulate_monitor_mwait(vcpu, "MWAIT");
+
+	if (ret && kvm_userspace_exit(vcpu, KVM_EXIT_MWAIT)) {
+		vcpu->run->exit_reason = KVM_EXIT_MWAIT;
+		ret = 0;
+	} else {
+		/*
+		 * Calling yield() has poorly defined semantics, but the
+		 * guest is in a busy loop and it's the best we can do
+		 * without a full emulation of MONITOR/MWAIT.
+		 */
+		yield();
+	}
+	return ret;
 }
 EXPORT_SYMBOL_GPL(kvm_emulate_mwait);
 
@@ -4554,6 +4568,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 			r |= KVM_X86_DISABLE_EXITS_MWAIT;
 		}
 		break;
+	case KVM_CAP_X86_USERSPACE_EXITS:
+		r = KVM_X86_USERSPACE_VALID_EXITS;
+		break;
 	case KVM_CAP_X86_SMM:
 		if (!IS_ENABLED(CONFIG_KVM_SMM))
 			break;
@@ -9643,11 +9660,11 @@ static int __kvm_emulate_halt(struct kvm_vcpu *vcpu, int state, int reason)
 	++vcpu->stat.halt_exits;
 	if (lapic_in_kernel(vcpu)) {
 		vcpu->arch.mp_state = state;
-		return 1;
-	} else {
-		vcpu->run->exit_reason = reason;
-		return 0;
+		if (!kvm_userspace_exit(vcpu, reason))
+			return 1;
 	}
+	vcpu->run->exit_reason = reason;
+	return 0;
 }
 
 int kvm_emulate_halt_noskip(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 1e7be1f6ab29..ce10a809151c 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -430,6 +430,19 @@ static inline bool kvm_notify_vmexit_enabled(struct kvm *kvm)
 	return kvm->arch.notify_vmexit_flags & KVM_X86_NOTIFY_VMEXIT_ENABLED;
 }
 
+static inline bool kvm_userspace_exit(struct kvm_vcpu *vcpu, int reason)
+{
+	if (reason == KVM_EXIT_HLT &&
+	    (vcpu->run->userspace_exits & KVM_X86_USERSPACE_EXIT_HLT))
+		return true;
+
+	if (reason == KVM_EXIT_MWAIT &&
+	    (vcpu->run->userspace_exits & KVM_X86_USERSPACE_EXIT_MWAIT))
+		return true;
+
+	return false;
+}
+
 enum kvm_intr_type {
 	/* Values are arbitrary, but must be non-zero. */
 	KVM_HANDLING_IRQ = 1,
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 13065dd96132..43d94d49fc24 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -264,6 +264,7 @@ struct kvm_xen_exit {
 #define KVM_EXIT_RISCV_SBI        35
 #define KVM_EXIT_RISCV_CSR        36
 #define KVM_EXIT_NOTIFY           37
+#define KVM_EXIT_MWAIT            38
 
 /* For KVM_EXIT_INTERNAL_ERROR */
 /* Emulate instruction failed. */
@@ -283,7 +284,8 @@ struct kvm_run {
 	/* in */
 	__u8 request_interrupt_window;
 	__u8 immediate_exit;
-	__u8 padding1[6];
+	__u8 userspace_exits;
+	__u8 padding1[5];
 
 	/* out */
 	__u32 exit_reason;
@@ -841,6 +843,11 @@ struct kvm_ioeventfd {
                                               KVM_X86_DISABLE_EXITS_PAUSE | \
                                               KVM_X86_DISABLE_EXITS_CSTATE)
 
+#define KVM_X86_USERSPACE_EXIT_MWAIT	(1 << 0)
+#define KVM_X86_USERSPACE_EXIT_HLT	(1 << 1)
+#define KVM_X86_USERSPACE_VALID_EXITS	(KVM_X86_USERSPACE_EXIT_MWAIT | \
+					 KVM_X86_USERSPACE_EXIT_HLT)
+
 /* for KVM_ENABLE_CAP */
 struct kvm_enable_cap {
 	/* in */
@@ -1192,6 +1199,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_COUNTER_OFFSET 227
 #define KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE 228
 #define KVM_CAP_ARM_SUPPORTED_BLOCK_SIZES 229
+#define KVM_CAP_X86_USERSPACE_EXITS 230
 
 #ifdef KVM_CAP_IRQ_ROUTING
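For completeness, userspace discovery of the capability might look
like the sketch below. Per the check_extension hunk above,
KVM_CHECK_EXTENSION returns the mask of valid userspace-exit bits when
the capability is known, and 0 on kernels that have never heard of it.

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Returns the mask of supported userspace-exit bits, or 0 on old kernels. */
static unsigned char probe_userspace_exits(int vm_fd)
{
	int mask = ioctl(vm_fd, KVM_CHECK_EXTENSION,
			 KVM_CAP_X86_USERSPACE_EXITS);

	return mask > 0 ? (unsigned char)mask : 0;
}

A VMM would AND its desired bits with this mask before ever writing
run->userspace_exits, so the same binary keeps working on older
kernels.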