From patchwork Fri Jun 2 16:09:05 2023
X-Patchwork-Submitter: Vipin Sharma
X-Patchwork-Id: 13265622
Date: Fri, 2 Jun 2023 09:09:05 -0700
In-Reply-To: <20230602160914.4011728-1-vipinsh@google.com>
References: <20230602160914.4011728-1-vipinsh@google.com>
Message-ID: <20230602160914.4011728-8-vipinsh@google.com>
Subject: [PATCH v2 07/16] KVM: mmu: Move mmu lock/unlock to arch code for clear dirty log
From: Vipin Sharma
To: maz@kernel.org, oliver.upton@linux.dev, james.morse@arm.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
	will@kernel.org, chenhuacai@kernel.org, aleksandar.qemu.devel@gmail.com,
	tsbogend@alpha.franken.de, anup@brainfault.org, atishp@atishpatra.org,
	paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
	seanjc@google.com, pbonzini@redhat.com, dmatlack@google.com,
	ricarkol@google.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org,
	linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Vipin Sharma

Move the mmu_lock lock and unlock calls from the common code in
kvm_clear_dirty_log_protect() to the arch-specific code in
kvm_arch_mmu_enable_log_dirty_pt_masked(). None of the other code inside
the for loop of kvm_clear_dirty_log_protect() needs mmu_lock exclusivity
apart from the arch-specific API call.

Future commits will switch the clear-dirty-log operation to run under the
mmu read lock instead of the write lock on ARM and, potentially, x86.

No functional changes intended.
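The shape of the change, as a minimal illustrative sketch (simplified,
not the actual kernel code; the real diff follows below):

	/* Before: common code wraps the arch hook in KVM_MMU_LOCK(). */
	KVM_MMU_LOCK(kvm);
	kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot, offset, mask);
	KVM_MMU_UNLOCK(kvm);

	/*
	 * After: each arch takes its own flavour of mmu_lock inside the
	 * hook, e.g. write_lock() on arm64/x86, spin_lock() on mips/riscv.
	 */
	void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm, ...)
	{
		write_lock(&kvm->mmu_lock);
		/* ... write-protect / clear dirty bits for the masked gfns ... */
		write_unlock(&kvm->mmu_lock);
	}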
Signed-off-by: Vipin Sharma
---
 arch/arm64/kvm/mmu.c   | 2 ++
 arch/mips/kvm/mmu.c    | 2 ++
 arch/riscv/kvm/mmu.c   | 2 ++
 arch/x86/kvm/mmu/mmu.c | 3 +++
 virt/kvm/dirty_ring.c  | 2 --
 virt/kvm/kvm_main.c    | 4 ----
 6 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 6db9ef288ec3..0c2c2c0846f1 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1125,6 +1125,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 	phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
 	phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
 
+	write_lock(&kvm->mmu_lock);
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
 	stage2_wp_range(&kvm->arch.mmu, start, end);
@@ -1139,6 +1140,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 	 */
 	if (kvm_dirty_log_manual_protect_and_init_set(kvm))
 		kvm_mmu_split_huge_pages(kvm, start, end);
+	write_unlock(&kvm->mmu_lock);
 }
 
 static void kvm_send_hwpoison_signal(unsigned long address, short lsb)
diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index e8c08988ed37..33c5af333ff9 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -419,7 +419,9 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 	gfn_t start = base_gfn + __ffs(mask);
 	gfn_t end = base_gfn + __fls(mask);
 
+	spin_lock(&kvm->mmu_lock);
 	kvm_mips_mkclean_gpa_pt(kvm, start, end);
+	spin_unlock(&kvm->mmu_lock);
 }
 
 /*
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index f2eb47925806..fe026ff5eb65 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -399,7 +399,9 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 	phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
 	phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
 
+	spin_lock(&kvm->mmu_lock);
 	gstage_wp_range(kvm, start, end);
+	spin_unlock(&kvm->mmu_lock);
 }
 
 void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c8961f45e3b1..6fff4228e31c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1382,6 +1382,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 				struct kvm_memory_slot *slot,
 				gfn_t gfn_offset, unsigned long mask)
 {
+	write_lock(&kvm->mmu_lock);
 	/*
 	 * Huge pages are NOT write protected when we start dirty logging in
 	 * initially-all-set mode; must write protect them here so that they
@@ -1412,6 +1413,8 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 		kvm_mmu_clear_dirty_pt_masked(kvm, slot, gfn_offset, mask);
 	else
 		kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
+
+	write_unlock(&kvm->mmu_lock);
 }
 
 int kvm_cpu_dirty_log_size(void)
diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
index c1cd7dfe4a90..d894c58d2152 100644
--- a/virt/kvm/dirty_ring.c
+++ b/virt/kvm/dirty_ring.c
@@ -66,9 +66,7 @@ static void kvm_reset_dirty_gfn(struct kvm *kvm, u32 slot, u64 offset, u64 mask)
 	if (!memslot || (offset + __fls(mask)) >= memslot->npages)
 		return;
 
-	KVM_MMU_LOCK(kvm);
 	kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot, offset, mask);
-	KVM_MMU_UNLOCK(kvm);
 }
 
 int kvm_dirty_ring_alloc(struct kvm_dirty_ring *ring, int index, u32 size)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 13aed654111a..747bfa2f1dd3 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2160,7 +2160,6 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
 		dirty_bitmap_buffer = kvm_second_dirty_bitmap(memslot);
 		memset(dirty_bitmap_buffer, 0, n);
 
-		KVM_MMU_LOCK(kvm);
 		for (i = 0; i < n / sizeof(long); i++) {
 			unsigned long mask;
 			gfn_t offset;
@@ -2176,7 +2175,6 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
 			kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
 								offset, mask);
 		}
-		KVM_MMU_UNLOCK(kvm);
 	}
 
 	if (flush)
@@ -2271,7 +2269,6 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm,
 	if (copy_from_user(dirty_bitmap_buffer, log->dirty_bitmap, n))
 		return -EFAULT;
 
-	KVM_MMU_LOCK(kvm);
 	for (offset = log->first_page, i = offset / BITS_PER_LONG,
 		 n = DIV_ROUND_UP(log->num_pages, BITS_PER_LONG);
 	     n--; i++, offset += BITS_PER_LONG) {
@@ -2294,7 +2291,6 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm,
 							offset, mask);
 		}
 	}
-	KVM_MMU_UNLOCK(kvm);
 
 	if (flush)
 		kvm_arch_flush_remote_tlbs_memslot(kvm, memslot);
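
For reference, the KVM_MMU_LOCK()/KVM_MMU_UNLOCK() helpers dropped from the
common code are arch-dependent wrappers, roughly the following (paraphrased
from virt/kvm/kvm_mm.h around the time of this series; check the tree you
apply against):

	#ifdef KVM_HAVE_MMU_RWLOCK
	#define KVM_MMU_LOCK(kvm)	write_lock(&(kvm)->mmu_lock)
	#define KVM_MMU_UNLOCK(kvm)	write_unlock(&(kvm)->mmu_lock)
	#else
	#define KVM_MMU_LOCK(kvm)	spin_lock(&(kvm)->mmu_lock)
	#define KVM_MMU_UNLOCK(kvm)	spin_unlock(&(kvm)->mmu_lock)
	#endif

which is why the open-coded replacements above are write_lock()/write_unlock()
on arm64 and x86 (rwlock-based mmu_lock) and spin_lock()/spin_unlock() on
mips and riscv.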