From patchwork Thu Jan 13 22:18:27 2022
X-Patchwork-Submitter: Jing Zhang <jingzhangos@google.com>
X-Patchwork-Id: 12713157
Date: Thu, 13 Jan 2022 22:18:27 +0000
In-Reply-To: <20220113221829.2785604-1-jingzhangos@google.com>
References: <20220113221829.2785604-1-jingzhangos@google.com>
Message-Id: <20220113221829.2785604-2-jingzhangos@google.com>
Subject: [PATCH v1 1/3] KVM: arm64: Use read/write spin lock for MMU protection
From: Jing Zhang <jingzhangos@google.com>
To: KVM <kvm@vger.kernel.org>, KVMARM, Marc Zyngier, Will Deacon,
    Paolo Bonzini, David Matlack, Oliver Upton, Reiji Watanabe,
    Ricardo Koller, Raghavendra Rao Ananta
Cc: Jing Zhang
X-Mailing-List: kvm@vger.kernel.org

Replace the MMU spinlock with a rwlock and update every site that
acquired the lock to take it for write. A future commit will add a
fast path that performs permission relaxation during dirty logging
under a read lock.
Signed-off-by: Jing Zhang <jingzhangos@google.com>
---
 arch/arm64/include/asm/kvm_host.h |  2 ++
 arch/arm64/kvm/mmu.c              | 36 +++++++++++++++----------------
 2 files changed, 20 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 3b44ea17af88..6c99c0335bae 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -50,6 +50,8 @@
 #define KVM_DIRTY_LOG_MANUAL_CAPS   (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | \
                                      KVM_DIRTY_LOG_INITIALLY_SET)

+#define KVM_HAVE_MMU_RWLOCK
+
 /*
  * Mode of operation configurable with kvm-arm.mode early param.
  * See Documentation/admin-guide/kernel-parameters.txt for more information.

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index bc2aba953299..cafd5813c949 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -58,7 +58,7 @@ static int stage2_apply_range(struct kvm *kvm, phys_addr_t addr,
                        break;

                if (resched && next != end)
-                       cond_resched_lock(&kvm->mmu_lock);
+                       cond_resched_rwlock_write(&kvm->mmu_lock);
        } while (addr = next, addr != end);

        return ret;
@@ -179,7 +179,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
        struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
        phys_addr_t end = start + size;

-       assert_spin_locked(&kvm->mmu_lock);
+       lockdep_assert_held_write(&kvm->mmu_lock);
        WARN_ON(size & ~PAGE_MASK);
        WARN_ON(stage2_apply_range(kvm, start, end, kvm_pgtable_stage2_unmap,
                                   may_block));
@@ -213,13 +213,13 @@ static void stage2_flush_vm(struct kvm *kvm)
        int idx, bkt;

        idx = srcu_read_lock(&kvm->srcu);
-       spin_lock(&kvm->mmu_lock);
+       write_lock(&kvm->mmu_lock);

        slots = kvm_memslots(kvm);
        kvm_for_each_memslot(memslot, bkt, slots)
                stage2_flush_memslot(kvm, memslot);

-       spin_unlock(&kvm->mmu_lock);
+       write_unlock(&kvm->mmu_lock);
        srcu_read_unlock(&kvm->srcu, idx);
 }

@@ -720,13 +720,13 @@ void stage2_unmap_vm(struct kvm *kvm)
        idx = srcu_read_lock(&kvm->srcu);
        mmap_read_lock(current->mm);
-       spin_lock(&kvm->mmu_lock);
+       write_lock(&kvm->mmu_lock);

        slots = kvm_memslots(kvm);
        kvm_for_each_memslot(memslot, bkt, slots)
                stage2_unmap_memslot(kvm, memslot);

-       spin_unlock(&kvm->mmu_lock);
+       write_unlock(&kvm->mmu_lock);
        mmap_read_unlock(current->mm);
        srcu_read_unlock(&kvm->srcu, idx);
 }

@@ -736,14 +736,14 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
        struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
        struct kvm_pgtable *pgt = NULL;

-       spin_lock(&kvm->mmu_lock);
+       write_lock(&kvm->mmu_lock);
        pgt = mmu->pgt;
        if (pgt) {
                mmu->pgd_phys = 0;
                mmu->pgt = NULL;
                free_percpu(mmu->last_vcpu_ran);
        }
-       spin_unlock(&kvm->mmu_lock);
+       write_unlock(&kvm->mmu_lock);

        if (pgt) {
                kvm_pgtable_stage2_destroy(pgt);
@@ -783,10 +783,10 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
                if (ret)
                        break;

-               spin_lock(&kvm->mmu_lock);
+               write_lock(&kvm->mmu_lock);
                ret = kvm_pgtable_stage2_map(pgt, addr, PAGE_SIZE, pa, prot,
                                             &cache);
-               spin_unlock(&kvm->mmu_lock);
+               write_unlock(&kvm->mmu_lock);
                if (ret)
                        break;
@@ -834,9 +834,9 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
        start = memslot->base_gfn << PAGE_SHIFT;
        end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
-       spin_lock(&kvm->mmu_lock);
+       write_lock(&kvm->mmu_lock);
        stage2_wp_range(&kvm->arch.mmu, start, end);
-       spin_unlock(&kvm->mmu_lock);
+       write_unlock(&kvm->mmu_lock);
        kvm_flush_remote_tlbs(kvm);
 }
@@ -1212,7 +1212,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
        if (exec_fault && device)
                return -ENOEXEC;

-       spin_lock(&kvm->mmu_lock);
+       write_lock(&kvm->mmu_lock);
        pgt = vcpu->arch.hw_mmu->pgt;
        if (mmu_notifier_retry(kvm, mmu_seq))
                goto out_unlock;
@@ -1271,7 +1271,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
        }

 out_unlock:
-       spin_unlock(&kvm->mmu_lock);
+       write_unlock(&kvm->mmu_lock);
        kvm_set_pfn_accessed(pfn);
        kvm_release_pfn_clean(pfn);
        return ret != -EAGAIN ? ret : 0;
@@ -1286,10 +1286,10 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
        trace_kvm_access_fault(fault_ipa);

-       spin_lock(&vcpu->kvm->mmu_lock);
+       write_lock(&vcpu->kvm->mmu_lock);
        mmu = vcpu->arch.hw_mmu;
        kpte = kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa);
-       spin_unlock(&vcpu->kvm->mmu_lock);
+       write_unlock(&vcpu->kvm->mmu_lock);

        pte = __pte(kpte);
        if (pte_valid(pte))
@@ -1692,9 +1692,9 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
        gpa_t gpa = slot->base_gfn << PAGE_SHIFT;
        phys_addr_t size = slot->npages << PAGE_SHIFT;

-       spin_lock(&kvm->mmu_lock);
+       write_lock(&kvm->mmu_lock);
        unmap_stage2_range(&kvm->arch.mmu, gpa, size);
-       spin_unlock(&kvm->mmu_lock);
+       write_unlock(&kvm->mmu_lock);
 }

 /*
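The conversion above is mechanical: every existing critical section takes
the lock for write, which preserves the old exclusive-spinlock semantics,
and nothing runs on the read side yet. The same structure can be sketched
in portable C with POSIX rwlocks; this is a minimal illustration of the
pattern only, not kernel code, and struct mmu, unmap_range() and
relax_permissions() are made-up names:

#include <pthread.h>
#include <stdio.h>

/* Stand-in for the stage-2 MMU state that kvm->mmu_lock protects. */
struct mmu {
        pthread_rwlock_t lock;
        unsigned long mapped_pages;
};

/* Structural changes keep exclusive access: the write side behaves
 * exactly like the old spinlock did. */
static void unmap_range(struct mmu *mmu)
{
        pthread_rwlock_wrlock(&mmu->lock);
        mmu->mapped_pages = 0;
        pthread_rwlock_unlock(&mmu->lock);
}

/* Once a fast path exists, operations that only toggle permission bits
 * can share the lock and run concurrently on many vCPUs. */
static void relax_permissions(struct mmu *mmu)
{
        pthread_rwlock_rdlock(&mmu->lock);
        /* permission relaxation would go here */
        pthread_rwlock_unlock(&mmu->lock);
}

int main(void)
{
        struct mmu mmu = { .mapped_pages = 512 };

        pthread_rwlock_init(&mmu.lock, NULL);
        unmap_range(&mmu);
        relax_permissions(&mmu);
        pthread_rwlock_destroy(&mmu.lock);
        printf("mapped pages after unmap: %lu\n", mmu.mapped_pages);
        return 0;
}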
From patchwork Thu Jan 13 22:18:28 2022
X-Patchwork-Submitter: Jing Zhang <jingzhangos@google.com>
X-Patchwork-Id: 12713158
Date: Thu, 13 Jan 2022 22:18:28 +0000
In-Reply-To: <20220113221829.2785604-1-jingzhangos@google.com>
References: <20220113221829.2785604-1-jingzhangos@google.com>
Message-Id: <20220113221829.2785604-3-jingzhangos@google.com>
Subject: [PATCH v1 2/3] KVM: arm64: Add fast path to handle permission relaxation during dirty logging
From: Jing Zhang <jingzhangos@google.com>
To: KVM <kvm@vger.kernel.org>, KVMARM, Marc Zyngier, Will Deacon,
    Paolo Bonzini, David Matlack, Oliver Upton, Reiji Watanabe,
    Ricardo Koller, Raghavendra Rao Ananta
Cc: Jing Zhang
X-Mailing-List: kvm@vger.kernel.org

To reduce MMU lock contention during dirty logging, perform all
permission relaxation operations under a read lock.

Signed-off-by: Jing Zhang <jingzhangos@google.com>
---
 arch/arm64/kvm/mmu.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index cafd5813c949..15393cb61a3f 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1084,6 +1084,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
        unsigned long vma_pagesize, fault_granule;
        enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
        struct kvm_pgtable *pgt;
+       bool use_mmu_readlock = false;

        fault_granule = 1UL << ARM64_HW_PGTABLE_LEVEL_SHIFT(fault_level);
        write_fault = kvm_is_write_fault(vcpu);
@@ -1212,7 +1213,19 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
        if (exec_fault && device)
                return -ENOEXEC;

-       write_lock(&kvm->mmu_lock);
+       if (fault_status == FSC_PERM && fault_granule == PAGE_SIZE
+           && logging_active && write_fault)
+               use_mmu_readlock = true;
+       /*
+        * To reduce MMU lock contention and enhance concurrency during
+        * dirty logging, only acquire the read lock for permission
+        * relaxation. This fast path greatly reduces the performance
+        * degradation of guest workloads.
+        */
+       if (use_mmu_readlock)
+               read_lock(&kvm->mmu_lock);
+       else
+               write_lock(&kvm->mmu_lock);
        pgt = vcpu->arch.hw_mmu->pgt;
        if (mmu_notifier_retry(kvm, mmu_seq))
                goto out_unlock;
@@ -1271,7 +1284,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
        }

 out_unlock:
-       write_unlock(&kvm->mmu_lock);
+       if (use_mmu_readlock)
+               read_unlock(&kvm->mmu_lock);
+       else
+               write_unlock(&kvm->mmu_lock);
        kvm_set_pfn_accessed(pfn);
        kvm_release_pfn_clean(pfn);
        return ret != -EAGAIN ? ret : 0;
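The guard condition is what makes the read lock safe here: a write
permission fault at page granularity with dirty logging active can only
mean a previously write-protected PTE needs its write bit restored, so no
page-table structure changes and the shared lock suffices. A minimal
sketch of the take/release pairing, again using POSIX rwlocks rather than
the kernel API, with handle_fault() and its flags as hypothetical names:

#include <pthread.h>
#include <stdbool.h>

static pthread_rwlock_t mmu_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Mirrors the patch's predicate: permission fault (FSC_PERM),
 * page-granule mapping, dirty logging active, and a write fault. */
static int handle_fault(bool perm_fault, bool page_granule,
                        bool logging_active, bool write_fault)
{
        bool use_readlock = perm_fault && page_granule &&
                            logging_active && write_fault;

        if (use_readlock)
                pthread_rwlock_rdlock(&mmu_lock);
        else
                pthread_rwlock_wrlock(&mmu_lock);

        /* fast path: relax a permission bit; slow path: rebuild mappings */

        /* POSIX has a single unlock call for both sides; the kernel's
         * read_unlock() and write_unlock() differ, which is why the patch
         * carries use_mmu_readlock down to the out_unlock label. */
        pthread_rwlock_unlock(&mmu_lock);
        return 0;
}

int main(void)
{
        /* A dirty-logging write fault takes the shared fast path. */
        return handle_fault(true, true, true, true);
}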
From patchwork Thu Jan 13 22:18:29 2022
X-Patchwork-Submitter: Jing Zhang <jingzhangos@google.com>
X-Patchwork-Id: 12713159
Date: Thu, 13 Jan 2022 22:18:29 +0000
In-Reply-To: <20220113221829.2785604-1-jingzhangos@google.com>
References: <20220113221829.2785604-1-jingzhangos@google.com>
Message-Id: <20220113221829.2785604-4-jingzhangos@google.com>
Subject: [PATCH v1 3/3] KVM: selftests: Add vgic initialization for dirty log perf test for ARM
From: Jing Zhang <jingzhangos@google.com>
To: KVM <kvm@vger.kernel.org>, KVMARM, Marc Zyngier, Will Deacon,
    Paolo Bonzini, David Matlack, Oliver Upton, Reiji Watanabe,
    Ricardo Koller, Raghavendra Rao Ananta
Cc: Jing Zhang
X-Mailing-List: kvm@vger.kernel.org
On ARM64, if no vgic is set up before the dirty log perf test runs, the
userspace irqchip is used, which skews the dirty log perf test results.

Signed-off-by: Jing Zhang <jingzhangos@google.com>
---
 tools/testing/selftests/kvm/dirty_log_perf_test.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index 1954b964d1cf..b501338d9430 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -18,6 +18,12 @@
 #include "test_util.h"
 #include "perf_test_util.h"
 #include "guest_modes.h"
+#ifdef __aarch64__
+#include "aarch64/vgic.h"
+
+#define GICD_BASE_GPA                  0x8000000ULL
+#define GICR_BASE_GPA                  0x80A0000ULL
+#endif

 /* How many host loops to run by default (one KVM_GET_DIRTY_LOG for each loop)*/
 #define TEST_HOST_LOOP_N               2UL
@@ -200,6 +206,10 @@ static void run_test(enum vm_guest_mode mode, void *arg)
                vm_enable_cap(vm, &cap);
        }

+#ifdef __aarch64__
+       vgic_v3_setup(vm, nr_vcpus, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+#endif
+
        /* Start the iterations */
        iteration = 0;
        host_quit = false;
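The in-kernel vgic matters for the measurement because, with a userspace
irqchip, interrupt delivery on arm64 goes through exits to userspace,
inflating the measured dirty-logging cost. The guard pattern the patch
uses can be shown as a small standalone sketch; setup_vgic() below is a
stub standing in for the selftest helper vgic_v3_setup(), and the GPA
constants are the ones the patch chooses:

#include <stdio.h>

/* Sketch of the patch's pattern: arch-specific interrupt-controller
 * setup compiled in only on the architecture that needs it. */
#ifdef __aarch64__
#define GICD_BASE_GPA 0x8000000ULL
#define GICR_BASE_GPA 0x80A0000ULL

/* Stub standing in for the selftest helper vgic_v3_setup(). */
static void setup_vgic(int nr_vcpus)
{
        printf("vgic-v3: GICD=%#llx GICR=%#llx, %d vcpus, 64 IRQs\n",
               GICD_BASE_GPA, GICR_BASE_GPA, nr_vcpus);
}
#endif

static void run_test(int nr_vcpus)
{
#ifdef __aarch64__
        /* Without this, KVM falls back to a userspace irqchip and the
         * timings include interrupt round trips through userspace. */
        setup_vgic(nr_vcpus);
#endif
        printf("dirty log perf test, %d vcpus\n", nr_vcpus);
}

int main(void)
{
        run_test(4);
        return 0;
}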