From patchwork Wed Mar 27 12:48:50 2024
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 13606538
From: Will Deacon
To: kvmarm@lists.linux.dev
Cc: linux-arm-kernel@lists.infradead.org, Will Deacon, Catalin Marinas,
	Gavin Shan, Marc Zyngier, Mostafa Saleh, Oliver Upton,
	Quentin Perret, Raghavendra Rao Ananta, Ryan Roberts,
	Shaoqin Huang
Subject: [PATCH v2 1/4] KVM: arm64: Don't defer TLB invalidation when
	zapping table entries
Date: Wed, 27 Mar 2024 12:48:50 +0000
Message-Id: <20240327124853.11206-2-will@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20240327124853.11206-1-will@kernel.org>
References: <20240327124853.11206-1-will@kernel.org>
Commit 7657ea920c54 ("KVM: arm64: Use TLBI range-based instructions for
unmap") introduced deferred TLB invalidation for the stage-2 page-table so
that range-based invalidation can be used for the accumulated addresses.
This works fine if the structure of the page-tables remains unchanged, but
if entire tables are zapped and subsequently freed then we transiently leave
the hardware page-table walker with a reference to freed memory thanks to
the translation walk caches. For example, stage2_unmap_walker() will free
page-table pages:

	if (childp)
		mm_ops->put_page(childp);

and issue the TLB invalidation later in kvm_pgtable_stage2_unmap():

	if (stage2_unmap_defer_tlb_flush(pgt))
		/* Perform the deferred TLB invalidations */
		kvm_tlb_flush_vmid_range(pgt->mmu, addr, size);

For now, take the conservative approach and invalidate the TLB eagerly when
we clear a table entry. Note, however, that the existing level hint passed
to __kvm_tlb_flush_vmid_ipa() is incorrect and will be fixed in a
subsequent patch.

Cc: Raghavendra Rao Ananta
Cc: Shaoqin Huang
Cc: Marc Zyngier
Cc: Oliver Upton
Signed-off-by: Will Deacon
---
 arch/arm64/kvm/hyp/pgtable.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 3fae5830f8d2..de0b667ba296 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -896,9 +896,11 @@ static void stage2_unmap_put_pte(const struct kvm_pgtable_visit_ctx *ctx,
 	if (kvm_pte_valid(ctx->old)) {
 		kvm_clear_pte(ctx->ptep);
 
-		if (!stage2_unmap_defer_tlb_flush(pgt))
+		if (!stage2_unmap_defer_tlb_flush(pgt) ||
+		    kvm_pte_table(ctx->old, ctx->level)) {
 			kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu,
 				     ctx->addr, ctx->level);
+		}
 	}
 
 	mm_ops->put_page(ctx->ptep);
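
As a rough illustration of the behaviour this hunk introduces, the
following standalone C sketch models when the invalidation can still be
deferred to the later range-based TLBI versus when it must be issued
eagerly. It is not kernel code; the helper names below are hypothetical
stand-ins for the functions named in the diff:

	/*
	 * Standalone sketch (not kernel code) of the post-patch decision in
	 * stage2_unmap_put_pte(): deferring to a later range-based TLBI is
	 * only safe for leaf entries; clearing a table entry frees the child
	 * page-table page, so the walk caches must be invalidated eagerly.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	static bool defer_tlb_flush;	/* models stage2_unmap_defer_tlb_flush() */

	static void eager_tlbi(unsigned long ipa)
	{
		/* stands in for __kvm_tlb_flush_vmid_ipa() */
		printf("eager TLBI, IPA 0x%lx\n", ipa);
	}

	static void unmap_put_pte(unsigned long ipa, bool old_entry_was_table)
	{
		/* mirrors: !stage2_unmap_defer_tlb_flush(pgt) ||
		 *          kvm_pte_table(ctx->old, ctx->level)            */
		if (!defer_tlb_flush || old_entry_was_table)
			eager_tlbi(ipa);
		else
			printf("IPA 0x%lx: left for deferred range TLBI\n", ipa);
	}

	int main(void)
	{
		defer_tlb_flush = true;
		unmap_put_pte(0x80000000UL, false);	/* leaf: can be deferred  */
		unmap_put_pte(0x80200000UL, true);	/* table: flushed eagerly */
		return 0;
	}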