From patchwork Fri Apr 15 21:59:00 2022
X-Patchwork-Submitter: Oliver Upton
X-Patchwork-Id: 12815477
Date: Fri, 15 Apr 2022 21:59:00 +0000
In-Reply-To: <20220415215901.1737897-1-oupton@google.com>
Message-Id: <20220415215901.1737897-17-oupton@google.com>
References: <20220415215901.1737897-1-oupton@google.com>
Subject: [RFC PATCH 16/17] KVM: arm64: Enable parallel stage 2 MMU faults
From: Oliver Upton <oupton@google.com>
To: kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, Marc Zyngier, James Morse, Alexandru Elisei,
    Suzuki K Poulose, linux-arm-kernel@lists.infradead.org, Peter Shier,
    Ricardo Koller, Reiji Watanabe, Paolo Bonzini, Sean Christopherson,
    Ben Gardon, David Matlack, Oliver Upton

Voila! Since the map walkers are able to work in parallel, there is no
need to take the write lock on a stage 2 memory abort. Relax the locking
on map operations and cross our fingers that we got it right.

Signed-off-by: Oliver Upton <oupton@google.com>
---
 arch/arm64/kvm/mmu.c | 21 +++------------------
 1 file changed, 3 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 63cf18cdb978..2881051c3743 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1127,7 +1127,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	gfn_t gfn;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
-	bool use_read_lock = false;
 	unsigned long fault_level = kvm_vcpu_trap_get_fault_level(vcpu);
 	unsigned long vma_pagesize, fault_granule;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
@@ -1162,8 +1161,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (logging_active) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
-		use_read_lock = (fault_status == FSC_PERM && write_fault &&
-				 fault_granule == PAGE_SIZE);
 	} else {
 		vma_shift = get_vma_page_shift(vma, hva);
 	}
@@ -1267,15 +1264,8 @@
 	if (exec_fault && device)
 		return -ENOEXEC;
 
-	/*
-	 * To reduce MMU contentions and enhance concurrency during dirty
-	 * logging dirty logging, only acquire read lock for permission
-	 * relaxation.
-	 */
-	if (use_read_lock)
-		read_lock(&kvm->mmu_lock);
-	else
-		write_lock(&kvm->mmu_lock);
+	read_lock(&kvm->mmu_lock);
+
 	pgt = vcpu->arch.hw_mmu->pgt;
 	if (mmu_notifier_retry(kvm, mmu_seq))
 		goto out_unlock;
@@ -1322,8 +1312,6 @@
 	if (fault_status == FSC_PERM && vma_pagesize == fault_granule) {
 		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot);
 	} else {
-		WARN_ONCE(use_read_lock, "Attempted stage-2 map outside of write lock\n");
-
 		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
 					     __pfn_to_phys(pfn), prot,
 					     mmu_caches, true);
@@ -1336,10 +1324,7 @@
 	}
 
 out_unlock:
-	if (use_read_lock)
-		read_unlock(&kvm->mmu_lock);
-	else
-		write_unlock(&kvm->mmu_lock);
+	read_unlock(&kvm->mmu_lock);
 	kvm_set_pfn_accessed(pfn);
 	kvm_release_pfn_clean(pfn);
 	return ret != -EAGAIN ? ret : 0;
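
[Editor's note: for reference, a rough sketch of the fault-path locking in
user_mem_abort() with this patch applied, reconstructed from the hunks
above. It is heavily abbreviated -- fault classification, pfn lookup, and
the parts of the function untouched by this series are omitted -- so treat
it as an illustration of the resulting locking pattern rather than the
actual code.]

	/* Stage 2 faults now only take mmu_lock for read. */
	read_lock(&kvm->mmu_lock);

	pgt = vcpu->arch.hw_mmu->pgt;
	if (mmu_notifier_retry(kvm, mmu_seq))
		goto out_unlock;

	if (fault_status == FSC_PERM && vma_pagesize == fault_granule) {
		/* Permission fault on an existing mapping: relax perms in place. */
		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot);
	} else {
		/*
		 * Install the mapping under the read lock; per the series,
		 * the map walker is expected to cope with concurrent faults.
		 */
		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
					     __pfn_to_phys(pfn), prot,
					     mmu_caches, true);
	}

out_unlock:
	read_unlock(&kvm->mmu_lock);
	kvm_set_pfn_accessed(pfn);
	kvm_release_pfn_clean(pfn);
	return ret != -EAGAIN ? ret : 0;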