From patchwork Wed Sep 22 01:08:49 2021
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 12509263
Date: Wed, 22 Sep 2021 01:08:49 +0000
Message-Id: <20210922010851.2312845-1-jingzhangos@google.com>
Subject: [PATCH v1 1/3] KVM: arm64: Add arch specific exit reasons
From: Jing Zhang
To: KVM, KVMARM, Paolo Bonzini, Marc Zyngier, Will Deacon, David Matlack,
    Peter Shier, Oliver Upton, Sean Christopherson
Cc: Jing Zhang
X-Mailing-List: kvm@vger.kernel.org

Arch specific exit reasons are already available on other architectures.
Add arch specific exit reason support for ARM64; it will be used in KVM
stats for monitoring VCPU status.
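For reference, the `arm_exit_reason` enum this patch introduces lends itself to a small name table when post-processing the exit statistics in userspace. The stand-alone sketch below mirrors the enumerators added to `kvm_host.h`; the `NR_ARM_EXIT_REASONS` sentinel and the `arm_exit_reason_name()` helper are hypothetical additions for illustration, not part of this series:

```c
#include <assert.h>
#include <string.h>

/* Mirrors the arm_exit_reason enum this patch adds to kvm_host.h. */
enum arm_exit_reason {
        ARM_EXIT_UNKNOWN,
        ARM_EXIT_IRQ,
        ARM_EXIT_EL1_SERROR,
        ARM_EXIT_HYP_GONE,
        ARM_EXIT_IL,
        ARM_EXIT_WFI,
        ARM_EXIT_WFE,
        ARM_EXIT_CP15_32,
        ARM_EXIT_CP15_64,
        ARM_EXIT_CP14_32,
        ARM_EXIT_CP14_LS,
        ARM_EXIT_CP14_64,
        ARM_EXIT_HVC32,
        ARM_EXIT_SMC32,
        ARM_EXIT_HVC64,
        ARM_EXIT_SMC64,
        ARM_EXIT_SYS64,
        ARM_EXIT_SVE,
        ARM_EXIT_IABT_LOW,
        ARM_EXIT_DABT_LOW,
        ARM_EXIT_SOFTSTP_LOW,
        ARM_EXIT_WATCHPT_LOW,
        ARM_EXIT_BREAKPT_LOW,
        ARM_EXIT_BKPT32,
        ARM_EXIT_BRK64,
        ARM_EXIT_FP_ASIMD,
        ARM_EXIT_PAC,
        NR_ARM_EXIT_REASONS,    /* hypothetical sentinel, not in the patch */
};

/* Hypothetical helper: map an exit reason to a printable name. */
static const char *arm_exit_reason_name(enum arm_exit_reason r)
{
        static const char *const names[NR_ARM_EXIT_REASONS] = {
                [ARM_EXIT_UNKNOWN]  = "UNKNOWN",
                [ARM_EXIT_IRQ]      = "IRQ",
                [ARM_EXIT_WFI]      = "WFI",
                [ARM_EXIT_DABT_LOW] = "DABT_LOW",
                /* ... remaining entries elided for brevity ... */
        };
        const char *name = (unsigned int)r < NR_ARM_EXIT_REASONS ?
                           names[r] : NULL;

        /* Unfilled slots are zero-initialized, so fall back for those too. */
        return name ? name : "(bad exit reason)";
}
```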
Signed-off-by: Jing Zhang
---
 arch/arm64/include/asm/kvm_emulate.h |  5 +++
 arch/arm64/include/asm/kvm_host.h    | 33 +++++++++++++++
 arch/arm64/kvm/handle_exit.c         | 62 +++++++++++++++++++++++++---
 arch/arm64/kvm/mmu.c                 |  4 ++
 arch/arm64/kvm/sys_regs.c            |  6 +++
 5 files changed, 105 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index fd418955e31e..eb5ec3a479d3 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -319,6 +319,11 @@ static inline bool kvm_vcpu_trap_is_iabt(const struct kvm_vcpu *vcpu)
 	return kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_IABT_LOW;
 }
 
+static inline bool kvm_vcpu_trap_is_dabt(const struct kvm_vcpu *vcpu)
+{
+	return kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_DABT_LOW;
+}
+
 static inline bool kvm_vcpu_trap_is_exec_fault(const struct kvm_vcpu *vcpu)
 {
 	return kvm_vcpu_trap_is_iabt(vcpu) && !kvm_vcpu_abt_iss1tw(vcpu);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f8be56d5342b..0f0cea26ce32 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -278,6 +278,36 @@ struct vcpu_reset_state {
 	bool		reset;
 };
 
+enum arm_exit_reason {
+	ARM_EXIT_UNKNOWN,
+	ARM_EXIT_IRQ,
+	ARM_EXIT_EL1_SERROR,
+	ARM_EXIT_HYP_GONE,
+	ARM_EXIT_IL,
+	ARM_EXIT_WFI,
+	ARM_EXIT_WFE,
+	ARM_EXIT_CP15_32,
+	ARM_EXIT_CP15_64,
+	ARM_EXIT_CP14_32,
+	ARM_EXIT_CP14_LS,
+	ARM_EXIT_CP14_64,
+	ARM_EXIT_HVC32,
+	ARM_EXIT_SMC32,
+	ARM_EXIT_HVC64,
+	ARM_EXIT_SMC64,
+	ARM_EXIT_SYS64,
+	ARM_EXIT_SVE,
+	ARM_EXIT_IABT_LOW,
+	ARM_EXIT_DABT_LOW,
+	ARM_EXIT_SOFTSTP_LOW,
+	ARM_EXIT_WATCHPT_LOW,
+	ARM_EXIT_BREAKPT_LOW,
+	ARM_EXIT_BKPT32,
+	ARM_EXIT_BRK64,
+	ARM_EXIT_FP_ASIMD,
+	ARM_EXIT_PAC,
+};
+
 struct kvm_vcpu_arch {
 	struct kvm_cpu_context ctxt;
 	void *sve_state;
@@ -384,6 +414,9 @@ struct kvm_vcpu_arch {
 		u64 last_steal;
 		gpa_t base;
 	} steal;
+
+	/* Arch specific exit reason */
+	enum arm_exit_reason exit_reason;
 };
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 275a27368a04..90a47758b23d 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -49,6 +49,18 @@ static int handle_hvc(struct kvm_vcpu *vcpu)
 	return ret;
 }
 
+static int handle_hvc32(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.exit_reason = ARM_EXIT_HVC32;
+	return handle_hvc(vcpu);
+}
+
+static int handle_hvc64(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.exit_reason = ARM_EXIT_HVC64;
+	return handle_hvc(vcpu);
+}
+
 static int handle_smc(struct kvm_vcpu *vcpu)
 {
 	/*
@@ -64,12 +76,25 @@ static int handle_smc(struct kvm_vcpu *vcpu)
 	return 1;
 }
 
+static int handle_smc32(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.exit_reason = ARM_EXIT_SMC32;
+	return handle_smc(vcpu);
+}
+
+static int handle_smc64(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.exit_reason = ARM_EXIT_SMC64;
+	return handle_smc(vcpu);
+}
+
 /*
  * Guest access to FP/ASIMD registers are routed to this handler only
  * when the system doesn't support FP/ASIMD.
  */
 static int handle_no_fpsimd(struct kvm_vcpu *vcpu)
 {
+	vcpu->arch.exit_reason = ARM_EXIT_FP_ASIMD;
 	kvm_inject_undefined(vcpu);
 	return 1;
 }
@@ -91,10 +116,12 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu)
 	if (kvm_vcpu_get_esr(vcpu) & ESR_ELx_WFx_ISS_WFE) {
 		trace_kvm_wfx_arm64(*vcpu_pc(vcpu), true);
 		vcpu->stat.wfe_exit_stat++;
+		vcpu->arch.exit_reason = ARM_EXIT_WFE;
 		kvm_vcpu_on_spin(vcpu, vcpu_mode_priv(vcpu));
 	} else {
 		trace_kvm_wfx_arm64(*vcpu_pc(vcpu), false);
 		vcpu->stat.wfi_exit_stat++;
+		vcpu->arch.exit_reason = ARM_EXIT_WFI;
 		kvm_vcpu_block(vcpu);
 		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 	}
@@ -119,12 +146,29 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
 	u32 esr = kvm_vcpu_get_esr(vcpu);
+	u8 esr_ec = ESR_ELx_EC(esr);
 
 	run->exit_reason = KVM_EXIT_DEBUG;
 	run->debug.arch.hsr = esr;
 
-	if (ESR_ELx_EC(esr) == ESR_ELx_EC_WATCHPT_LOW)
+	switch (esr_ec) {
+	case ESR_ELx_EC_SOFTSTP_LOW:
+		vcpu->arch.exit_reason = ARM_EXIT_SOFTSTP_LOW;
+		break;
+	case ESR_ELx_EC_WATCHPT_LOW:
 		run->debug.arch.far = vcpu->arch.fault.far_el2;
+		vcpu->arch.exit_reason = ARM_EXIT_WATCHPT_LOW;
+		break;
+	case ESR_ELx_EC_BREAKPT_LOW:
+		vcpu->arch.exit_reason = ARM_EXIT_BREAKPT_LOW;
+		break;
+	case ESR_ELx_EC_BKPT32:
+		vcpu->arch.exit_reason = ARM_EXIT_BKPT32;
+		break;
+	case ESR_ELx_EC_BRK64:
+		vcpu->arch.exit_reason = ARM_EXIT_BRK64;
+		break;
+	}
 
 	return 0;
 }
@@ -136,12 +180,14 @@ static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu)
 	kvm_pr_unimpl("Unknown exception class: esr: %#08x -- %s\n",
 		      esr, esr_get_class_string(esr));
 
+	vcpu->arch.exit_reason = ARM_EXIT_UNKNOWN;
 	kvm_inject_undefined(vcpu);
 	return 1;
 }
 
 static int handle_sve(struct kvm_vcpu *vcpu)
 {
+	vcpu->arch.exit_reason = ARM_EXIT_SVE;
 	/* Until SVE is supported for guests: */
 	kvm_inject_undefined(vcpu);
 	return 1;
@@ -154,6 +200,7 @@ static int handle_sve(struct kvm_vcpu *vcpu)
  */
 static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu)
 {
+	vcpu->arch.exit_reason = ARM_EXIT_PAC;
 	kvm_inject_undefined(vcpu);
 	return 1;
 }
@@ -166,10 +213,10 @@ static exit_handle_fn arm_exit_handlers[] = {
 	[ESR_ELx_EC_CP14_MR]	= kvm_handle_cp14_32,
 	[ESR_ELx_EC_CP14_LS]	= kvm_handle_cp14_load_store,
 	[ESR_ELx_EC_CP14_64]	= kvm_handle_cp14_64,
-	[ESR_ELx_EC_HVC32]	= handle_hvc,
-	[ESR_ELx_EC_SMC32]	= handle_smc,
-	[ESR_ELx_EC_HVC64]	= handle_hvc,
-	[ESR_ELx_EC_SMC64]	= handle_smc,
+	[ESR_ELx_EC_HVC32]	= handle_hvc32,
+	[ESR_ELx_EC_SMC32]	= handle_smc32,
+	[ESR_ELx_EC_HVC64]	= handle_hvc64,
+	[ESR_ELx_EC_SMC64]	= handle_smc64,
 	[ESR_ELx_EC_SYS64]	= kvm_handle_sys_reg,
 	[ESR_ELx_EC_SVE]	= handle_sve,
 	[ESR_ELx_EC_IABT_LOW]	= kvm_handle_guest_abort,
@@ -230,8 +277,10 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
 	switch (exception_index) {
 	case ARM_EXCEPTION_IRQ:
+		vcpu->arch.exit_reason = ARM_EXIT_IRQ;
 		return 1;
 	case ARM_EXCEPTION_EL1_SERROR:
+		vcpu->arch.exit_reason = ARM_EXIT_EL1_SERROR;
 		return 1;
 	case ARM_EXCEPTION_TRAP:
 		return handle_trap_exceptions(vcpu);
@@ -240,6 +289,7 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
 		 * EL2 has been reset to the hyp-stub. This happens when a guest
 		 * is pre-empted by kvm_reboot()'s shutdown call.
 		 */
+		vcpu->arch.exit_reason = ARM_EXIT_HYP_GONE;
 		run->exit_reason = KVM_EXIT_FAIL_ENTRY;
 		return 0;
 	case ARM_EXCEPTION_IL:
@@ -247,11 +297,13 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
 		 * We attempted an illegal exception return. Guest state must
 		 * have been corrupted somehow. Give up.
 		 */
+		vcpu->arch.exit_reason = ARM_EXIT_IL;
 		run->exit_reason = KVM_EXIT_FAIL_ENTRY;
 		return -EINVAL;
 	default:
 		kvm_pr_unimpl("Unsupported exception type: %d",
			      exception_index);
+		vcpu->arch.exit_reason = ARM_EXIT_UNKNOWN;
 		run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
 		return 0;
 	}
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1a94a7ca48f2..a6a18d113c98 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1197,6 +1197,10 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
 	is_iabt = kvm_vcpu_trap_is_iabt(vcpu);
+	if (is_iabt)
+		vcpu->arch.exit_reason = ARM_EXIT_IABT_LOW;
+	else if (kvm_vcpu_trap_is_dabt(vcpu))
+		vcpu->arch.exit_reason = ARM_EXIT_DABT_LOW;
 
 	/* Synchronous External Abort? */
 	if (kvm_vcpu_abt_issea(vcpu)) {
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 1d46e185f31e..0915dfa589c7 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2158,6 +2158,7 @@ static int check_sysreg_table(const struct sys_reg_desc *table, unsigned int n,
 int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu)
 {
+	vcpu->arch.exit_reason = ARM_EXIT_CP14_LS;
 	kvm_inject_undefined(vcpu);
 	return 1;
 }
@@ -2325,21 +2326,25 @@ static int kvm_handle_cp_32(struct kvm_vcpu *vcpu,
 int kvm_handle_cp15_64(struct kvm_vcpu *vcpu)
 {
+	vcpu->arch.exit_reason = ARM_EXIT_CP15_64;
 	return kvm_handle_cp_64(vcpu, cp15_64_regs, ARRAY_SIZE(cp15_64_regs));
 }
 
 int kvm_handle_cp15_32(struct kvm_vcpu *vcpu)
 {
+	vcpu->arch.exit_reason = ARM_EXIT_CP15_32;
 	return kvm_handle_cp_32(vcpu, cp15_regs, ARRAY_SIZE(cp15_regs));
 }
 
 int kvm_handle_cp14_64(struct kvm_vcpu *vcpu)
 {
+	vcpu->arch.exit_reason = ARM_EXIT_CP14_64;
 	return kvm_handle_cp_64(vcpu, cp14_64_regs, ARRAY_SIZE(cp14_64_regs));
 }
 
 int kvm_handle_cp14_32(struct kvm_vcpu *vcpu)
 {
+	vcpu->arch.exit_reason = ARM_EXIT_CP14_32;
 	return kvm_handle_cp_32(vcpu, cp14_regs, ARRAY_SIZE(cp14_regs));
 }
@@ -2397,6 +2402,7 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
 	int ret;
 
 	trace_kvm_handle_sys_reg(esr);
+	vcpu->arch.exit_reason = ARM_EXIT_SYS64;
 
 	params = esr_sys64_to_params(esr);
 	params.regval = vcpu_get_reg(vcpu, Rt);

From patchwork Wed Sep 22 01:08:50 2021
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 12509265
Date: Wed, 22 Sep 2021 01:08:50 +0000
In-Reply-To: <20210922010851.2312845-1-jingzhangos@google.com>
Message-Id: <20210922010851.2312845-2-jingzhangos@google.com>
References: <20210922010851.2312845-1-jingzhangos@google.com>
Subject: [PATCH v1 2/3] KVM: arm64: Add counter stats for arch specific exit reasons
From: Jing Zhang
To: KVM, KVMARM, Paolo Bonzini, Marc Zyngier, Will Deacon, David Matlack,
    Peter Shier, Oliver Upton, Sean Christopherson
Cc: Jing Zhang
X-Mailing-List: kvm@vger.kernel.org

The exit reason stats can be used for monitoring the VCPU status.

Signed-off-by: Jing Zhang
---
 arch/arm64/include/asm/kvm_host.h | 33 ++++++++++++++++++++++++++++---
 arch/arm64/kvm/guest.c            | 33 +++++++++++++++++++++++++++----
 arch/arm64/kvm/handle_exit.c      | 22 ++++++++++++++++++---
 arch/arm64/kvm/mmu.c              |  7 +++++--
 arch/arm64/kvm/sys_regs.c         |  6 ++++++
 5 files changed, 89 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 0f0cea26ce32..4d65de22add3 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -607,13 +607,40 @@ struct kvm_vm_stat {
 struct kvm_vcpu_stat {
 	struct kvm_vcpu_stat_generic generic;
-	u64 hvc_exit_stat;
-	u64 wfe_exit_stat;
-	u64 wfi_exit_stat;
 	u64 mmio_exit_user;
 	u64 mmio_exit_kernel;
 	u64 signal_exits;
 	u64 exits;
+	/* Stats for arch specific exit reasons */
+	struct {
+		u64 exit_unknown;
+		u64 exit_irq;
+		u64 exit_el1_serror;
+		u64 exit_hyp_gone;
+		u64 exit_il;
+		u64 exit_wfi;
+		u64 exit_wfe;
+		u64 exit_cp15_32;
+		u64 exit_cp15_64;
+		u64 exit_cp14_32;
+		u64 exit_cp14_ls;
+		u64 exit_cp14_64;
+		u64 exit_hvc32;
+		u64 exit_smc32;
+		u64 exit_hvc64;
+		u64 exit_smc64;
+		u64 exit_sys64;
+		u64 exit_sve;
+		u64 exit_iabt_low;
+		u64 exit_dabt_low;
+		u64 exit_softstp_low;
+		u64 exit_watchpt_low;
+		u64 exit_breakpt_low;
+		u64 exit_bkpt32;
+		u64 exit_brk64;
+		u64 exit_fp_asimd;
+		u64 exit_pac;
+	};
 };
 
 int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init);
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 5ce26bedf23c..abd9327d7110 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -43,13 +43,38 @@ const struct kvm_stats_header kvm_vm_stats_header = {
 const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
 	KVM_GENERIC_VCPU_STATS(),
-	STATS_DESC_COUNTER(VCPU, hvc_exit_stat),
-	STATS_DESC_COUNTER(VCPU, wfe_exit_stat),
-	STATS_DESC_COUNTER(VCPU, wfi_exit_stat),
 	STATS_DESC_COUNTER(VCPU, mmio_exit_user),
 	STATS_DESC_COUNTER(VCPU, mmio_exit_kernel),
 	STATS_DESC_COUNTER(VCPU, signal_exits),
-	STATS_DESC_COUNTER(VCPU, exits)
+	STATS_DESC_COUNTER(VCPU, exits),
+	/* Stats for arch specific exit reasons */
+	STATS_DESC_COUNTER(VCPU, exit_unknown),
+	STATS_DESC_COUNTER(VCPU, exit_irq),
+	STATS_DESC_COUNTER(VCPU, exit_el1_serror),
+	STATS_DESC_COUNTER(VCPU, exit_hyp_gone),
+	STATS_DESC_COUNTER(VCPU, exit_il),
+	STATS_DESC_COUNTER(VCPU, exit_wfi),
+	STATS_DESC_COUNTER(VCPU, exit_wfe),
+	STATS_DESC_COUNTER(VCPU, exit_cp15_32),
+	STATS_DESC_COUNTER(VCPU, exit_cp15_64),
+	STATS_DESC_COUNTER(VCPU, exit_cp14_32),
+	STATS_DESC_COUNTER(VCPU, exit_cp14_ls),
+	STATS_DESC_COUNTER(VCPU, exit_cp14_64),
+	STATS_DESC_COUNTER(VCPU, exit_hvc32),
+	STATS_DESC_COUNTER(VCPU, exit_smc32),
+	STATS_DESC_COUNTER(VCPU, exit_hvc64),
+	STATS_DESC_COUNTER(VCPU, exit_smc64),
+	STATS_DESC_COUNTER(VCPU, exit_sys64),
+	STATS_DESC_COUNTER(VCPU, exit_sve),
+	STATS_DESC_COUNTER(VCPU, exit_iabt_low),
+	STATS_DESC_COUNTER(VCPU, exit_dabt_low),
+	STATS_DESC_COUNTER(VCPU, exit_softstp_low),
+	STATS_DESC_COUNTER(VCPU, exit_watchpt_low),
+	STATS_DESC_COUNTER(VCPU, exit_breakpt_low),
+	STATS_DESC_COUNTER(VCPU, exit_bkpt32),
+	STATS_DESC_COUNTER(VCPU, exit_brk64),
+	STATS_DESC_COUNTER(VCPU, exit_fp_asimd),
+	STATS_DESC_COUNTER(VCPU, exit_pac),
 };
 
 const struct kvm_stats_header kvm_vcpu_stats_header = {
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 90a47758b23d..e83cd52078b2 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -38,7 +38,6 @@ static int handle_hvc(struct kvm_vcpu *vcpu)
 	trace_kvm_hvc_arm64(*vcpu_pc(vcpu), vcpu_get_reg(vcpu, 0),
			    kvm_vcpu_hvc_get_imm(vcpu));
-	vcpu->stat.hvc_exit_stat++;
 
 	ret = kvm_hvc_call_handler(vcpu);
 	if (ret < 0) {
@@ -52,12 +51,14 @@ static int handle_hvc(struct kvm_vcpu *vcpu)
 static int handle_hvc32(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.exit_reason = ARM_EXIT_HVC32;
+	++vcpu->stat.exit_hvc32;
 	return handle_hvc(vcpu);
 }
 
 static int handle_hvc64(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.exit_reason = ARM_EXIT_HVC64;
+	++vcpu->stat.exit_hvc64;
 	return handle_hvc(vcpu);
 }
@@ -79,12 +80,14 @@ static int handle_smc(struct kvm_vcpu *vcpu)
 static int handle_smc32(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.exit_reason = ARM_EXIT_SMC32;
+	++vcpu->stat.exit_smc32;
 	return handle_smc(vcpu);
 }
 
 static int handle_smc64(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.exit_reason = ARM_EXIT_SMC64;
+	++vcpu->stat.exit_smc64;
 	return handle_smc(vcpu);
 }
@@ -95,6 +98,7 @@ static int handle_smc64(struct kvm_vcpu *vcpu)
 static int handle_no_fpsimd(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.exit_reason = ARM_EXIT_FP_ASIMD;
+	++vcpu->stat.exit_fp_asimd;
 	kvm_inject_undefined(vcpu);
 	return 1;
 }
@@ -115,13 +119,13 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu)
 {
 	if (kvm_vcpu_get_esr(vcpu) & ESR_ELx_WFx_ISS_WFE) {
 		trace_kvm_wfx_arm64(*vcpu_pc(vcpu), true);
-		vcpu->stat.wfe_exit_stat++;
 		vcpu->arch.exit_reason = ARM_EXIT_WFE;
+		++vcpu->stat.exit_wfe;
 		kvm_vcpu_on_spin(vcpu, vcpu_mode_priv(vcpu));
 	} else {
 		trace_kvm_wfx_arm64(*vcpu_pc(vcpu), false);
-		vcpu->stat.wfi_exit_stat++;
 		vcpu->arch.exit_reason = ARM_EXIT_WFI;
+		++vcpu->stat.exit_wfi;
 		kvm_vcpu_block(vcpu);
 		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
 	}
@@ -154,19 +158,24 @@ static int kvm_handle_guest_debug(struct kvm_vcpu *vcpu)
 	switch (esr_ec) {
 	case ESR_ELx_EC_SOFTSTP_LOW:
 		vcpu->arch.exit_reason = ARM_EXIT_SOFTSTP_LOW;
+		++vcpu->stat.exit_softstp_low;
 		break;
 	case ESR_ELx_EC_WATCHPT_LOW:
 		run->debug.arch.far = vcpu->arch.fault.far_el2;
 		vcpu->arch.exit_reason = ARM_EXIT_WATCHPT_LOW;
+		++vcpu->stat.exit_watchpt_low;
 		break;
 	case ESR_ELx_EC_BREAKPT_LOW:
 		vcpu->arch.exit_reason = ARM_EXIT_BREAKPT_LOW;
+		++vcpu->stat.exit_breakpt_low;
 		break;
 	case ESR_ELx_EC_BKPT32:
 		vcpu->arch.exit_reason = ARM_EXIT_BKPT32;
+		++vcpu->stat.exit_bkpt32;
 		break;
 	case ESR_ELx_EC_BRK64:
 		vcpu->arch.exit_reason = ARM_EXIT_BRK64;
+		++vcpu->stat.exit_brk64;
 		break;
 	}
@@ -181,6 +190,7 @@ static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu)
 		      esr, esr_get_class_string(esr));
 
 	vcpu->arch.exit_reason = ARM_EXIT_UNKNOWN;
+	++vcpu->stat.exit_unknown;
 	kvm_inject_undefined(vcpu);
 	return 1;
 }
@@ -188,6 +198,7 @@ static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu)
 static int handle_sve(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.exit_reason = ARM_EXIT_SVE;
+	++vcpu->stat.exit_sve;
 	/* Until SVE is supported for guests: */
 	kvm_inject_undefined(vcpu);
 	return 1;
@@ -201,6 +212,7 @@ static int handle_sve(struct kvm_vcpu *vcpu)
 static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.exit_reason = ARM_EXIT_PAC;
+	++vcpu->stat.exit_pac;
 	kvm_inject_undefined(vcpu);
 	return 1;
 }
@@ -278,9 +290,11 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
 	switch (exception_index) {
 	case ARM_EXCEPTION_IRQ:
 		vcpu->arch.exit_reason = ARM_EXIT_IRQ;
+		++vcpu->stat.exit_irq;
 		return 1;
 	case ARM_EXCEPTION_EL1_SERROR:
 		vcpu->arch.exit_reason = ARM_EXIT_EL1_SERROR;
+		++vcpu->stat.exit_el1_serror;
 		return 1;
 	case ARM_EXCEPTION_TRAP:
 		return handle_trap_exceptions(vcpu);
@@ -291,6 +305,7 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
 		 */
 		vcpu->arch.exit_reason = ARM_EXIT_HYP_GONE;
 		run->exit_reason = KVM_EXIT_FAIL_ENTRY;
+		++vcpu->stat.exit_hyp_gone;
 		return 0;
 	case ARM_EXCEPTION_IL:
 		/*
@@ -299,6 +314,7 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
 		 */
 		vcpu->arch.exit_reason = ARM_EXIT_IL;
 		run->exit_reason = KVM_EXIT_FAIL_ENTRY;
+		++vcpu->stat.exit_il;
 		return -EINVAL;
 	default:
 		kvm_pr_unimpl("Unsupported exception type: %d",
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index a6a18d113c98..799c756dd9f5 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1197,10 +1197,13 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
 	is_iabt = kvm_vcpu_trap_is_iabt(vcpu);
-	if (is_iabt)
+	if (is_iabt) {
 		vcpu->arch.exit_reason = ARM_EXIT_IABT_LOW;
-	else if (kvm_vcpu_trap_is_dabt(vcpu))
+		++vcpu->stat.exit_iabt_low;
+	} else if (kvm_vcpu_trap_is_dabt(vcpu)) {
 		vcpu->arch.exit_reason = ARM_EXIT_DABT_LOW;
+		++vcpu->stat.exit_dabt_low;
+	}
 
 	/* Synchronous External Abort? */
 	if (kvm_vcpu_abt_issea(vcpu)) {
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 0915dfa589c7..344a6ff26bf6 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2159,6 +2159,7 @@ static int check_sysreg_table(const struct sys_reg_desc *table, unsigned int n,
 int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.exit_reason = ARM_EXIT_CP14_LS;
+	++vcpu->stat.exit_cp14_ls;
 	kvm_inject_undefined(vcpu);
 	return 1;
 }
@@ -2327,24 +2328,28 @@ static int kvm_handle_cp_32(struct kvm_vcpu *vcpu,
 int kvm_handle_cp15_64(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.exit_reason = ARM_EXIT_CP15_64;
+	++vcpu->stat.exit_cp15_64;
 	return kvm_handle_cp_64(vcpu, cp15_64_regs, ARRAY_SIZE(cp15_64_regs));
 }
 
 int kvm_handle_cp15_32(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.exit_reason = ARM_EXIT_CP15_32;
+	++vcpu->stat.exit_cp15_32;
 	return kvm_handle_cp_32(vcpu, cp15_regs, ARRAY_SIZE(cp15_regs));
 }
 
 int kvm_handle_cp14_64(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.exit_reason = ARM_EXIT_CP14_64;
+	++vcpu->stat.exit_cp14_64;
 	return kvm_handle_cp_64(vcpu, cp14_64_regs, ARRAY_SIZE(cp14_64_regs));
 }
 
 int kvm_handle_cp14_32(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.exit_reason = ARM_EXIT_CP14_32;
+	++vcpu->stat.exit_cp14_32;
 	return kvm_handle_cp_32(vcpu, cp14_regs, ARRAY_SIZE(cp14_regs));
 }
@@ -2403,6 +2408,7 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
 	trace_kvm_handle_sys_reg(esr);
 	vcpu->arch.exit_reason = ARM_EXIT_SYS64;
+	++vcpu->stat.exit_sys64;
 
 	params = esr_sys64_to_params(esr);
 	params.regval = vcpu_get_reg(vcpu, Rt);

From patchwork Wed Sep 22 01:08:51 2021
X-Patchwork-Submitter: Jing Zhang
X-Patchwork-Id: 12509267
Date: Wed, 22 Sep 2021 01:08:51 +0000
In-Reply-To: <20210922010851.2312845-1-jingzhangos@google.com>
Message-Id: <20210922010851.2312845-3-jingzhangos@google.com>
References: <20210922010851.2312845-1-jingzhangos@google.com>
Subject: [PATCH v1 3/3] KVM: arm64: Add histogram stats for handling time of arch specific exit reasons
From: Jing Zhang
To: KVM, KVMARM, Paolo Bonzini, Marc Zyngier, Will Deacon, David Matlack,
    Peter Shier, Oliver Upton, Sean Christopherson
Cc: Jing Zhang
X-Mailing-List: kvm@vger.kernel.org

These logarithmic histogram stats are useful for monitoring how long the
handling of each kind of VCPU exit reason takes.
Signed-off-by: Jing Zhang
---
 arch/arm64/include/asm/kvm_host.h | 36 ++++++++++++
 arch/arm64/kvm/arm.c              |  4 ++
 arch/arm64/kvm/guest.c            | 43 ++++++++++++++
 arch/arm64/kvm/handle_exit.c      | 95 +++++++++++++++++++++++++++++++
 4 files changed, 178 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 4d65de22add3..f1a29ca3d4f3 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -417,6 +417,9 @@ struct kvm_vcpu_arch {
 	/* Arch specific exit reason */
 	enum arm_exit_reason exit_reason;
+
+	/* The timestamp for the last VCPU exit */
+	u64 last_exit_time;
 };
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
@@ -605,6 +608,8 @@ struct kvm_vm_stat {
 	struct kvm_vm_stat_generic generic;
 };
 
+#define ARM_EXIT_HIST_CNT 64
+
 struct kvm_vcpu_stat {
 	struct kvm_vcpu_stat_generic generic;
 	u64 mmio_exit_user;
@@ -641,6 +646,36 @@ struct kvm_vcpu_stat {
 		u64 exit_fp_asimd;
 		u64 exit_pac;
 	};
+	/* Histogram stats for handling time of arch specific exit reasons */
+	struct {
+		u64 exit_unknown_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_irq_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_el1_serror_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_hyp_gone_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_il_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_wfi_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_wfe_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_cp15_32_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_cp15_64_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_cp14_32_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_cp14_ls_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_cp14_64_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_hvc32_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_smc32_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_hvc64_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_smc64_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_sys64_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_sve_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_iabt_low_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_dabt_low_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_softstp_low_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_watchpt_low_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_breakpt_low_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_bkpt32_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_brk64_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_fp_asimd_hist[ARM_EXIT_HIST_CNT];
+		u64 exit_pac_hist[ARM_EXIT_HIST_CNT];
+	};
 };
 
 int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init);
@@ -715,6 +750,7 @@ void force_vm_exit(const cpumask_t *mask);
 int handle_exit(struct kvm_vcpu *vcpu, int exception_index);
 void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index);
+void update_hist_exit_stats(struct kvm_vcpu *vcpu);
 
 int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu);
 int kvm_handle_cp14_32(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index fe102cd2e518..156f80b699d3 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -795,6 +795,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	ret = 1;
 	run->exit_reason = KVM_EXIT_UNKNOWN;
 	while (ret > 0) {
+		/* Update histogram stats for exit reasons */
+		update_hist_exit_stats(vcpu);
+
 		/*
 		 * Check conditions before entering the guest
 		 */
@@ -903,6 +906,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		 */
 		guest_exit();
 		trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
+		vcpu->arch.last_exit_time = ktime_to_ns(ktime_get());
 
 		/* Exit types that need handling before we can be preempted */
 		handle_exit_early(vcpu, ret);
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index abd9327d7110..bbf51578fdec 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -75,6 +75,49 @@ const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
 	STATS_DESC_COUNTER(VCPU, exit_brk64),
 	STATS_DESC_COUNTER(VCPU, exit_fp_asimd),
 	STATS_DESC_COUNTER(VCPU, exit_pac),
+	/* Histogram stats for handling time of arch specific exit reasons */
+	STATS_DESC_LOGHIST_TIME_NSEC(
+			VCPU, exit_unknown_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(
+			VCPU, exit_irq_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(
+			VCPU, exit_el1_serror_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(
+			VCPU, exit_hyp_gone_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(VCPU, exit_il_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(VCPU, exit_wfi_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(VCPU, exit_wfe_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(
+			VCPU, exit_cp15_32_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(
+			VCPU, exit_cp15_64_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(
+			VCPU, exit_cp14_32_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(
+			VCPU, exit_cp14_ls_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(
+			VCPU, exit_cp14_64_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(VCPU, exit_hvc32_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(VCPU, exit_smc32_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(VCPU, exit_hvc64_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(VCPU, exit_smc64_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(VCPU, exit_sys64_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(VCPU, exit_sve_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(
+			VCPU, exit_iabt_low_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(
+			VCPU, exit_dabt_low_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(
+			VCPU, exit_softstp_low_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(
+			VCPU, exit_watchpt_low_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(
+			VCPU, exit_breakpt_low_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(VCPU, exit_bkpt32_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(VCPU, exit_brk64_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(
+			VCPU, exit_fp_asimd_hist, ARM_EXIT_HIST_CNT),
+	STATS_DESC_LOGHIST_TIME_NSEC(VCPU, exit_pac_hist, ARM_EXIT_HIST_CNT),
 };
 
 const struct kvm_stats_header kvm_vcpu_stats_header = {
diff --git
a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index e83cd52078b2..5e642a6275c1 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -395,3 +395,98 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, panic("HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%016lx\n", spsr, elr_virt, esr, far, hpfar, par, vcpu); } + +void update_hist_exit_stats(struct kvm_vcpu *vcpu) +{ + u64 val = ktime_to_ns(ktime_get()) - vcpu->arch.last_exit_time; + + if (unlikely(!vcpu->arch.last_exit_time)) + return; + + switch (vcpu->arch.exit_reason) { + case ARM_EXIT_UNKNOWN: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_unknown_hist, val); + break; + case ARM_EXIT_IRQ: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_irq_hist, val); + break; + case ARM_EXIT_EL1_SERROR: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_el1_serror_hist, val); + break; + case ARM_EXIT_HYP_GONE: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_hyp_gone_hist, val); + break; + case ARM_EXIT_IL: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_il_hist, val); + break; + case ARM_EXIT_WFI: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_wfi_hist, val); + break; + case ARM_EXIT_WFE: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_wfe_hist, val); + break; + case ARM_EXIT_CP15_32: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_cp15_32_hist, val); + break; + case ARM_EXIT_CP15_64: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_cp15_64_hist, val); + break; + case ARM_EXIT_CP14_32: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_cp14_32_hist, val); + break; + case ARM_EXIT_CP14_LS: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_cp14_ls_hist, val); + break; + case ARM_EXIT_CP14_64: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_cp14_64_hist, val); + break; + case ARM_EXIT_HVC32: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_hvc32_hist, val); + break; + case ARM_EXIT_SMC32: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_smc32_hist, val); + break; + case ARM_EXIT_HVC64: + 
KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_hvc64_hist, val); + break; + case ARM_EXIT_SMC64: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_smc64_hist, val); + break; + case ARM_EXIT_SYS64: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_sys64_hist, val); + break; + case ARM_EXIT_SVE: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_sve_hist, val); + break; + case ARM_EXIT_IABT_LOW: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_iabt_low_hist, val); + break; + case ARM_EXIT_DABT_LOW: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_dabt_low_hist, val); + break; + case ARM_EXIT_SOFTSTP_LOW: + KVM_STATS_LOG_HIST_UPDATE( + vcpu->stat.exit_softstp_low_hist, val); + break; + case ARM_EXIT_WATCHPT_LOW: + KVM_STATS_LOG_HIST_UPDATE( + vcpu->stat.exit_watchpt_low_hist, val); + break; + case ARM_EXIT_BREAKPT_LOW: + KVM_STATS_LOG_HIST_UPDATE( + vcpu->stat.exit_breakpt_low_hist, val); + break; + case ARM_EXIT_BKPT32: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_bkpt32_hist, val); + break; + case ARM_EXIT_BRK64: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_brk64_hist, val); + break; + case ARM_EXIT_FP_ASIMD: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_fp_asimd_hist, val); + break; + case ARM_EXIT_PAC: + KVM_STATS_LOG_HIST_UPDATE(vcpu->stat.exit_pac_hist, val); + break; + } +}