From patchwork Tue Apr 30 19:31:53 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13649901
Date: Tue, 30 Apr 2024 12:31:53 -0700
Message-ID: <20240430193157.419425-1-seanjc@google.com>
Subject: [PATCH 0/4] KVM: Fold kvm_arch_sched_in() into kvm_arch_vcpu_load()
From: Sean Christopherson
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
	Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
	Sean Christopherson, Paolo Bonzini
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	kvm@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
	linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org

Drop kvm_arch_sched_in() and instead pass a @sched_in boolean to
kvm_arch_vcpu_load().

While fiddling with an idea for optimizing state management on AMD CPUs,
I wanted to skip re-saving certain host state when a vCPU is scheduled
back in, as the state (theoretically) shouldn't change for the task while
it's scheduled out.  Actually doing that was annoying and unnecessarily
brittle due to having a separate API for the kvm_sched_in() case (the
state save needed to be in kvm_arch_vcpu_load() for the common path).
E.g. I could have set a "temporary"-ish flag somewhere in kvm_vcpu, but
(a) that's gross and (b) it would rely on the arbitrary ordering between
sched_in() and vcpu_load() staying the same.

The only real downside I see is that arm64 and riscv end up having to
pass "false" for their direct usage of kvm_arch_vcpu_load(), and passing
boolean literals isn't ideal.  But that can be solved by adding an inner
helper that omits the @sched_in param (I almost added a patch to do that,
but I couldn't convince myself it was necessary).

The other motivation for this is to avoid yet another arch hook, and more
arbitrary ordering, if there's a future need to hook kvm_sched_out()
(we've come close on the x86 side several times).
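Roughly, the end state in the generic code looks something like the
sketch below.  Note, this is a simplified illustration of the idea, not
the actual diff (see patch 1 for that), and kvm_vcpu_load() is a purely
hypothetical wrapper that is *not* part of this series; it's shown only
to illustrate how the boolean literals at the arm64/riscv call sites
could be hidden if that ever becomes annoying.

  /* The arch hook gains a flag in lieu of a separate sched_in() hook. */
  void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu, bool sched_in);

  static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
  {
          struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

          __this_cpu_write(kvm_running_vcpu, vcpu);

          /* The vCPU is being scheduled back in after being preempted. */
          kvm_arch_vcpu_load(vcpu, cpu, true);
  }

  void vcpu_load(struct kvm_vcpu *vcpu)
  {
          int cpu = get_cpu();

          __this_cpu_write(kvm_running_vcpu, vcpu);
          preempt_notifier_register(&vcpu->preempt_notifier);

          /* A plain vcpu_load() is not a sched_in. */
          kvm_arch_vcpu_load(vcpu, cpu, false);
          put_cpu();
  }

  /*
   * Hypothetical inner helper (deliberately not added in this series) so
   * that arch code doesn't have to pass a boolean literal directly.
   */
  static inline void kvm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
  {
          kvm_arch_vcpu_load(vcpu, cpu, false);
  }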
Sean Christopherson (4):
  KVM: Plumb in a @sched_in flag to kvm_arch_vcpu_load()
  KVM: VMX: Move PLE grow/shrink helpers above vmx_vcpu_load()
  KVM: x86: Fold kvm_arch_sched_in() into kvm_arch_vcpu_load()
  KVM: Delete the now unused kvm_arch_sched_in()

 arch/arm64/include/asm/kvm_host.h     |  1 -
 arch/arm64/kvm/arm.c                  |  2 +-
 arch/arm64/kvm/emulate-nested.c       |  4 +-
 arch/arm64/kvm/reset.c                |  2 +-
 arch/loongarch/include/asm/kvm_host.h |  1 -
 arch/loongarch/kvm/vcpu.c             |  2 +-
 arch/mips/include/asm/kvm_host.h      |  1 -
 arch/mips/kvm/mmu.c                   |  2 +-
 arch/powerpc/include/asm/kvm_host.h   |  1 -
 arch/powerpc/kvm/powerpc.c            |  2 +-
 arch/riscv/include/asm/kvm_host.h     |  1 -
 arch/riscv/kvm/vcpu.c                 |  4 +-
 arch/s390/include/asm/kvm_host.h      |  1 -
 arch/s390/kvm/kvm-s390.c              |  2 +-
 arch/x86/include/asm/kvm-x86-ops.h    |  1 -
 arch/x86/include/asm/kvm_host.h       |  4 +-
 arch/x86/kvm/pmu.c                    |  6 +--
 arch/x86/kvm/svm/svm.c                | 13 ++---
 arch/x86/kvm/vmx/main.c               |  2 -
 arch/x86/kvm/vmx/vmx.c                | 75 +++++++++++++--------
 arch/x86/kvm/vmx/x86_ops.h            |  3 +-
 arch/x86/kvm/x86.c                    | 26 +++++-----
 include/linux/kvm_host.h              |  4 +-
 virt/kvm/kvm_main.c                   |  5 +-
 24 files changed, 70 insertions(+), 95 deletions(-)


base-commit: a96cb3bf390eebfead5fc7a2092f8452a7997d1b