From patchwork Fri Apr 19 08:59:24 2024
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 13635965
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: isaku.yamahata@intel.com, xiaoyao.li@intel.com, binbin.wu@linux.intel.com,
    seanjc@google.com, rick.p.edgecombe@intel.com
Subject: [PATCH 3/6] KVM: x86/mmu: Extract __kvm_mmu_do_page_fault()
Date: Fri, 19 Apr 2024 04:59:24 -0400
Message-ID: <20240419085927.3648704-4-pbonzini@redhat.com>
In-Reply-To: <20240419085927.3648704-1-pbonzini@redhat.com>
References: <20240419085927.3648704-1-pbonzini@redhat.com>

From: Isaku Yamahata

Extract __kvm_mmu_do_page_fault() from kvm_mmu_do_page_fault(). The inner
function initializes struct kvm_page_fault and calls the fault handler; the
outer function updates the stats and converts the return code.
KVM_PRE_FAULT_MEMORY will call the KVM page fault handler.

This patch makes emulation_type always set, regardless of the return code.
kvm_mmu_page_fault() is the only caller of kvm_mmu_do_page_fault(), and it
references the value only when RET_PF_EMULATE is returned. Therefore, this
adjustment doesn't affect functionality.

No functional change intended.

Suggested-by: Sean Christopherson
Signed-off-by: Isaku Yamahata
Message-ID:
Signed-off-by: Paolo Bonzini
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu_internal.h | 38 +++++++++++++++++++++------------
 1 file changed, 24 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index e68a60974cf4..9baae6c223ee 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -287,8 +287,8 @@ static inline void kvm_mmu_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
 				      fault->is_private);
 }
 
-static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
-					u64 err, bool prefetch, int *emulation_type)
+static inline int __kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
+					  u64 err, bool prefetch, int *emulation_type)
 {
 	struct kvm_page_fault fault = {
 		.addr = cr2_or_gpa,
@@ -318,6 +318,27 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		fault.slot = kvm_vcpu_gfn_to_memslot(vcpu, fault.gfn);
 	}
 
+	if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) && fault.is_tdp)
+		r = kvm_tdp_page_fault(vcpu, &fault);
+	else
+		r = vcpu->arch.mmu->page_fault(vcpu, &fault);
+
+	if (r == RET_PF_EMULATE && fault.is_private) {
+		kvm_mmu_prepare_memory_fault_exit(vcpu, &fault);
+		r = -EFAULT;
+	}
+
+	if (fault.write_fault_to_shadow_pgtable && emulation_type)
+		*emulation_type |= EMULTYPE_WRITE_PF_TO_SP;
+
+	return r;
+}
+
+static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
+					u64 err, bool prefetch, int *emulation_type)
+{
+	int r;
+
 	/*
 	 * Async #PF "faults", a.k.a. prefetch faults, are not faults from the
 	 * guest perspective and have already been counted at the time of the
@@ -326,18 +347,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	if (!prefetch)
 		vcpu->stat.pf_taken++;
 
-	if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) && fault.is_tdp)
-		r = kvm_tdp_page_fault(vcpu, &fault);
-	else
-		r = vcpu->arch.mmu->page_fault(vcpu, &fault);
-
-	if (r == RET_PF_EMULATE && fault.is_private) {
-		kvm_mmu_prepare_memory_fault_exit(vcpu, &fault);
-		return -EFAULT;
-	}
-
-	if (fault.write_fault_to_shadow_pgtable && emulation_type)
-		*emulation_type |= EMULTYPE_WRITE_PF_TO_SP;
+	r = __kvm_mmu_do_page_fault(vcpu, cr2_or_gpa, err, prefetch, emulation_type);
 
 	/*
 	 * Similar to above, prefetch faults aren't truly spurious, and the