From patchwork Thu Sep 24 11:07:04 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexandru Elisei
X-Patchwork-Id: 11796989
From: Alexandru Elisei
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 5/7] KVM: arm64: pmu: Make overflow handler NMI safe
Date: Thu, 24 Sep 2020 12:07:04 +0100
Message-Id: <20200924110706.254996-6-alexandru.elisei@arm.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200924110706.254996-1-alexandru.elisei@arm.com>
References: <20200924110706.254996-1-alexandru.elisei@arm.com>
Cc: mark.rutland@arm.com, sumit.garg@linaro.org, kvm@vger.kernel.org,
    Julien Thierry, Marc Zyngier, catalin.marinas@arm.com,
    Suzuki K Pouloze, Will Deacon, swboyd@chromium.org, James Morse,
    maz@kernel.org, will@kernel.org, kvmarm@lists.cs.columbia.edu

From: Julien Thierry

kvm_vcpu_kick() is not NMI safe. When the overflow handler is called from
NMI context, defer waking the vcpu to an irq_work queue.

A vcpu can be freed by kvm_destroy_vm() while it is not running. Prevent
the irq_work from running for a non-existent vcpu by calling
irq_work_sync() on the PMU destroy path.

Cc: Julien Thierry
Cc: Marc Zyngier
Cc: Will Deacon
Cc: Mark Rutland
Cc: Catalin Marinas
Cc: James Morse
Cc: Suzuki K Pouloze
Cc: kvm@vger.kernel.org
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Julien Thierry
Tested-by: Sumit Garg (Developerbox)
[Alexandru E.: Added irq_work_sync()]
Signed-off-by: Alexandru Elisei
Reviewed-by: Marc Zyngier
---
I suggested in v6 that I would add an irq_work_sync() call to
kvm_pmu_vcpu_reset(). It turns out that's not necessary: a vcpu reset is
performed by the vcpu being reset, with interrupts enabled, which means
any pending irq_work has had a chance to run before the reset takes place.

 arch/arm64/kvm/pmu-emul.c | 26 +++++++++++++++++++++++++-
 include/kvm/arm_pmu.h     |  1 +
 2 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index f0d0312c0a55..81916e360b1e 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -269,6 +269,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
 		kvm_pmu_release_perf_event(&pmu->pmc[i]);
+	irq_work_sync(&vcpu->arch.pmu.overflow_work);
 }
 
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
@@ -433,6 +434,22 @@ void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
 	kvm_pmu_update_state(vcpu);
 }
 
+/**
+ * When perf interrupt is an NMI, we cannot safely notify the vcpu corresponding
+ * to the event.
+ * This is why we need a callback to do it once outside of the NMI context.
+ */
+static void kvm_pmu_perf_overflow_notify_vcpu(struct irq_work *work)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_pmu *pmu;
+
+	pmu = container_of(work, struct kvm_pmu, overflow_work);
+	vcpu = kvm_pmc_to_vcpu(pmu->pmc);
+
+	kvm_vcpu_kick(vcpu);
+}
+
 /**
  * When the perf event overflows, set the overflow status and inform the vcpu.
 */
@@ -465,7 +482,11 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
 
 	if (kvm_pmu_overflow_status(vcpu)) {
 		kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
-		kvm_vcpu_kick(vcpu);
+
+		if (!in_nmi())
+			kvm_vcpu_kick(vcpu);
+		else
+			irq_work_queue(&vcpu->arch.pmu.overflow_work);
 	}
 
 	cpu_pmu->pmu.start(perf_event, PERF_EF_RELOAD);
@@ -764,6 +785,9 @@ static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
 			return ret;
 	}
 
+	init_irq_work(&vcpu->arch.pmu.overflow_work,
+		      kvm_pmu_perf_overflow_notify_vcpu);
+
 	vcpu->arch.pmu.created = true;
 	return 0;
 }
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 6db030439e29..dbf4f08d42e5 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -27,6 +27,7 @@ struct kvm_pmu {
 	bool ready;
 	bool created;
 	bool irq_level;
+	struct irq_work overflow_work;
 };
 
 #define kvm_arm_pmu_v3_ready(v)	((v)->arch.pmu.ready)
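
For context, the change above follows the usual irq_work deferral pattern:
work that is not NMI safe is queued from NMI context and executed later from
IRQ context, and irq_work_sync() on the teardown path guarantees the deferred
callback cannot run after the object is freed. Below is a minimal,
self-contained sketch of that pattern; all names (my_pmu_ctx,
my_notify_target(), my_overflow_handler(), ...) are hypothetical stand-ins for
illustration and are not part of this patch.

/*
 * Sketch of the deferral pattern used by this patch (hypothetical names):
 * an operation that must not run in NMI context is either done directly
 * (normal IRQ) or pushed to an irq_work (NMI).
 */
#include <linux/irq_work.h>
#include <linux/kernel.h>
#include <linux/preempt.h>

struct my_pmu_ctx {
	struct irq_work overflow_work;
	/* ... whatever state the deferred callback needs ... */
};

/* Stand-in for kvm_vcpu_kick(); must not be called from NMI context. */
static void my_notify_target(struct my_pmu_ctx *ctx)
{
}

/* Runs later in IRQ context, where the non-NMI-safe operation is allowed. */
static void my_overflow_work_fn(struct irq_work *work)
{
	struct my_pmu_ctx *ctx = container_of(work, struct my_pmu_ctx,
					      overflow_work);

	my_notify_target(ctx);
}

/* Called from the PMU interrupt, which may be a normal IRQ or an NMI. */
static void my_overflow_handler(struct my_pmu_ctx *ctx)
{
	if (!in_nmi())
		my_notify_target(ctx);			/* safe to do directly */
	else
		irq_work_queue(&ctx->overflow_work);	/* defer out of NMI */
}

static void my_ctx_init(struct my_pmu_ctx *ctx)
{
	init_irq_work(&ctx->overflow_work, my_overflow_work_fn);
}

static void my_ctx_destroy(struct my_pmu_ctx *ctx)
{
	/* Make sure no deferred callback can run after ctx is freed. */
	irq_work_sync(&ctx->overflow_work);
}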