
[1/3] KVM: arm64: Don't free memcache pages in kvm_phys_addr_ioremap()

Message ID 20200723110227.16001-2-will@kernel.org
State New, archived
Series KVM: arm64: Clean up memcache usage for page-table pages

Commit Message

Will Deacon July 23, 2020, 11:02 a.m. UTC
kvm_phys_addr_ioremap() unconditionally empties out the memcache pages
for the current vCPU on return. This causes subsequent topups to allocate
fresh pages and is at odds with the behaviour when mapping memory in
user_mem_abort().

Remove the call to mmu_free_memory_cache() from kvm_phys_addr_ioremap(),
allowing the cached pages to be used by a later mapping.
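
For reference, the memcache in question is a small stash of pre-allocated
pages used to populate stage-2 page tables. A minimal sketch of the helpers
as they looked in arm64 kernels of this era (simplified, not the exact code
in this tree):

	/* A stash of pre-allocated pages for stage-2 page-table levels. */
	struct kvm_mmu_memory_cache {
		int nobjs;
		void *objects[KVM_NR_MEM_OBJS];
	};

	/* Refill the cache to 'max' pages if fewer than 'min' remain. */
	static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
					  int min, int max)
	{
		void *page;

		if (cache->nobjs >= min)
			return 0;
		while (cache->nobjs < max) {
			page = (void *)__get_free_page(GFP_PGTABLE_USER);
			if (!page)
				return -ENOMEM;
			cache->objects[cache->nobjs++] = page;
		}
		return 0;
	}

	/* Hand every cached page back to the page allocator. */
	static void mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
	{
		while (mc->nobjs)
			free_page((unsigned long)mc->objects[--mc->nobjs]);
	}

Freeing the cache on every return, as kvm_phys_addr_ioremap() did, discards
that work and sends the next topup back to the page allocator, whereas
user_mem_abort() keeps its cache warm across faults.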

Cc: Marc Zyngier <maz@kernel.org>
Cc: Quentin Perret <qperret@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kvm/mmu.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

Patch

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 31058e6e7c2a..9102373a9744 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1484,19 +1484,17 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 					     kvm_mmu_cache_min_pages(kvm),
 					     KVM_NR_MEM_OBJS);
 		if (ret)
-			goto out;
+			break;
 		spin_lock(&kvm->mmu_lock);
 		ret = stage2_set_pte(kvm, &cache, addr, &pte,
 						KVM_S2PTE_FLAG_IS_IOMAP);
 		spin_unlock(&kvm->mmu_lock);
 		if (ret)
-			goto out;
+			break;
 
 		pfn++;
 	}
 
-out:
-	mmu_free_memory_cache(&cache);
 	return ret;
 }
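
The topup is done before taking kvm->mmu_lock because stage2_set_pte() may
need fresh pages to install missing intermediate table levels, and it cannot
go to the page allocator (which may sleep) while holding the spinlock.
Instead it consumes pages that the topup staged, roughly like this (again a
simplified sketch of the era's helper):

	/*
	 * Pop a pre-allocated page off the cache. Only legal after a
	 * successful topup, hence the BUG_ON rather than an error path.
	 */
	static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
	{
		BUG_ON(!mc || !mc->nobjs);
		return mc->objects[--mc->nobjs];
	}

With the mmu_free_memory_cache() call gone, any pages still cached when the
loop finishes remain available for a later mapping's topup, matching the
behaviour of user_mem_abort().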