RISC-V: KVM: Simplify kvm_arch_prepare_memory_region()

Message ID c5e918630ba37273d7b0f4e4dbb6f90d4c2f321d.1668347565.git.christophe.jaillet@wanadoo.fr (mailing list archive)
State Not Applicable
Series RISC-V: KVM: Simplify kvm_arch_prepare_memory_region()

Checks

Context Check Description
conchuod/patch_count success Link
conchuod/cover_letter success Single patches do not need cover letters
conchuod/tree_selection success Guessed tree name to be for-next
conchuod/fixes_present success Fixes tag not required for -next series
conchuod/verify_signedoff success Signed-off-by tag matches author and committer
conchuod/kdoc success Errors and warnings before: 0 this patch: 0
conchuod/module_param success Was 0 now: 0
conchuod/build_rv32_defconfig success Build OK
conchuod/build_warn_rv64 success Errors and warnings before: 0 this patch: 0
conchuod/dtb_warn_rv64 success Errors and warnings before: 0 this patch: 0
conchuod/header_inline success No static functions without inline keyword in header files
conchuod/checkpatch success total: 0 errors, 0 warnings, 0 checks, 11 lines checked
conchuod/source_inline success Was 0 now: 0
conchuod/build_rv64_nommu_k210_defconfig success Build OK
conchuod/verify_fixes success No Fixes tag
conchuod/build_rv64_nommu_virt_defconfig success Build OK

Commit Message

Christophe JAILLET Nov. 13, 2022, 1:52 p.m. UTC
In kvm_arch_prepare_memory_region(), the spin_lock()/spin_unlock() call
can be avoided when no error occurs.

Switch to kvm_riscv_gstage_iounmap(), which is equivalent to the current
code but has clearer semantics. It also embeds the locking logic, so the
lock is not taken at all when ret == 0.

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
---
I don't use a cross-compiler, so this patch is NOT even compile-tested.
---
 arch/riscv/kvm/mmu.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)
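
For context, the lock elision works because kvm_riscv_gstage_iounmap() takes
kvm->mmu_lock itself before unmapping the range. The following is a minimal
sketch of that helper, reconstructed from the description above rather than
copied from arch/riscv/kvm/mmu.c:

	/*
	 * Approximate shape of the existing helper (illustration only, not
	 * part of this patch): it acquires kvm->mmu_lock around the unmap,
	 * so callers that skip it on success also skip the locking.
	 */
	void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa,
				      unsigned long size)
	{
		spin_lock(&kvm->mmu_lock);
		gstage_unmap_range(kvm, gpa, size, false);
		spin_unlock(&kvm->mmu_lock);
	}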

Comments

Anup Patel Nov. 27, 2022, 12:37 p.m. UTC | #1
On Sun, Nov 13, 2022 at 7:22 PM Christophe JAILLET
<christophe.jaillet@wanadoo.fr> wrote:
>
> In kvm_arch_prepare_memory_region(), the spin_lock()/spin_unlock() call
> can be avoided when no error occurs.
>
> Switch to kvm_riscv_gstage_iounmap(), which is equivalent to the current
> code but has clearer semantics. It also embeds the locking logic, so the
> lock is not taken at all when ret == 0.
>
> Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>

Looks good to me.

Reviewed-by: Anup Patel <anup@brainfault.org>

I have tested this on a QEMU virt machine for RV64.

Queued this patch for Linux-6.2

Thanks,
Anup

> ---
> I don't use a cross-compiler, so this patch is NOT even compile-tested.
> ---
>  arch/riscv/kvm/mmu.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> index 3620ecac2fa1..c8834e463763 100644
> --- a/arch/riscv/kvm/mmu.c
> +++ b/arch/riscv/kvm/mmu.c
> @@ -537,10 +537,8 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>         if (change == KVM_MR_FLAGS_ONLY)
>                 goto out;
>
> -       spin_lock(&kvm->mmu_lock);
>         if (ret)
> -               gstage_unmap_range(kvm, base_gpa, size, false);
> -       spin_unlock(&kvm->mmu_lock);
> +               kvm_riscv_gstage_iounmap(kvm, base_gpa, size);
>
>  out:
>         mmap_read_unlock(current->mm);
> --
> 2.34.1
>

Patch

diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 3620ecac2fa1..c8834e463763 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -537,10 +537,8 @@  int kvm_arch_prepare_memory_region(struct kvm *kvm,
 	if (change == KVM_MR_FLAGS_ONLY)
 		goto out;
 
-	spin_lock(&kvm->mmu_lock);
 	if (ret)
-		gstage_unmap_range(kvm, base_gpa, size, false);
-	spin_unlock(&kvm->mmu_lock);
+		kvm_riscv_gstage_iounmap(kvm, base_gpa, size);
 
 out:
 	mmap_read_unlock(current->mm);
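
With the patch applied, the tail of kvm_arch_prepare_memory_region() would
read roughly as follows; the surrounding context and the final return ret;
are assumed from the hunk above rather than quoted from the tree:

	if (change == KVM_MR_FLAGS_ONLY)
		goto out;

	/* The lock is now only taken, inside the helper, on the error path. */
	if (ret)
		kvm_riscv_gstage_iounmap(kvm, base_gpa, size);

out:
	mmap_read_unlock(current->mm);
	return ret;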