From patchwork Mon Mar 25 18:51:56 2024
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 13602764
From: Will Deacon <will@kernel.org>
To: kvmarm@lists.linux.dev
Cc: linux-arm-kernel@lists.infradead.org, Will Deacon, Catalin Marinas,
	Gavin Shan, Marc Zyngier, Mostafa Saleh, Oliver Upton,
	Quentin Perret, Raghavendra Rao Ananta, Ryan Roberts,
	Shaoqin Huang, Suzuki K Poulose, Zenghui Yu
Subject: [PATCH 1/3] KVM: arm64: Don't defer TLB invalidation when zapping table entries
Date: Mon, 25 Mar 2024 18:51:56 +0000
Message-Id: <20240325185158.8565-2-will@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20240325185158.8565-1-will@kernel.org>
References: <20240325185158.8565-1-will@kernel.org>

Commit 7657ea920c54 ("KVM: arm64: Use TLBI range-based instructions for
unmap") introduced deferred TLB invalidation for the stage-2 page-table
so that range-based invalidation can be used for the accumulated
addresses. This works fine if the structure of the page-tables remains
unchanged, but if entire tables are zapped and subsequently freed then
we transiently leave the hardware page-table walker with a reference to
freed memory thanks to the translation walk caches. For example,
stage2_unmap_walker() will free page-table pages:

	if (childp)
		mm_ops->put_page(childp);

and issue the TLB invalidation later in kvm_pgtable_stage2_unmap():

	if (stage2_unmap_defer_tlb_flush(pgt))
		/* Perform the deferred TLB invalidations */
		kvm_tlb_flush_vmid_range(pgt->mmu, addr, size);

For now, take the conservative approach and invalidate the TLB eagerly
when we clear a table entry.

Cc: Raghavendra Rao Ananta
Cc: Shaoqin Huang
Cc: Marc Zyngier
Cc: Oliver Upton
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kvm/hyp/pgtable.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 3fae5830f8d2..de0b667ba296 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -896,9 +896,11 @@ static void stage2_unmap_put_pte(const struct kvm_pgtable_visit_ctx *ctx,
 	if (kvm_pte_valid(ctx->old)) {
 		kvm_clear_pte(ctx->ptep);
 
-		if (!stage2_unmap_defer_tlb_flush(pgt))
+		if (!stage2_unmap_defer_tlb_flush(pgt) ||
+		    kvm_pte_table(ctx->old, ctx->level)) {
 			kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu,
 				     ctx->addr, ctx->level);
+		}
 	}
 
 	mm_ops->put_page(ctx->ptep);
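
For context, this is roughly how stage2_unmap_put_pte() reads with the
hunk applied. Only the lines shown in the hunk are taken from the patch;
the function signature, the pgt local, and the comments are reconstructed
assumptions for illustration rather than quotes from pgtable.c:

	static void stage2_unmap_put_pte(const struct kvm_pgtable_visit_ctx *ctx,
					 struct kvm_s2_mmu *mmu,
					 struct kvm_pgtable_mm_ops *mm_ops)
	{
		struct kvm_pgtable *pgt = ctx->arg;	/* assumed declaration */

		if (kvm_pte_valid(ctx->old)) {
			/* Break: unhook the entry from the page-table. */
			kvm_clear_pte(ctx->ptep);

			/*
			 * Invalidate eagerly unless the flush may safely be
			 * deferred. A table entry is never deferred: its
			 * child page is freed below, and the translation
			 * walk caches must not be left holding a reference
			 * to freed memory.
			 */
			if (!stage2_unmap_defer_tlb_flush(pgt) ||
			    kvm_pte_table(ctx->old, ctx->level)) {
				kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu,
					     ctx->addr, ctx->level);
			}
		}

		/* Drop the reference on this page-table page. */
		mm_ops->put_page(ctx->ptep);
	}

The kvm_pte_table() check is what makes the invalidation eager for table
entries only: leaf entries still take the deferred, range-based path when
stage2_unmap_defer_tlb_flush() returns true, so the benefit of commit
7657ea920c54 is retained for the common unmap case.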