From patchwork Thu Feb 8 15:18:32 2024
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 13549961
From: Shameer Kolothum
Subject: [RFC PATCH v2 2/7] KVM: arm64: Introduce support to pin VMIDs
Date: Thu, 8 Feb 2024 15:18:32 +0000
Message-ID: <20240208151837.35068-3-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20240208151837.35068-1-shameerali.kolothum.thodi@huawei.com>
References: <20240208151837.35068-1-shameerali.kolothum.thodi@huawei.com>
List-Id: linux-arm-kernel@lists.infradead.org

Introduce kvm_arm_pinned_vmid_get() and kvm_arm_pinned_vmid_put() to pin the VMID associated with a KVM instance. Pinning guarantees that the VMID remains the same across a rollover.

This is in preparation for introducing support in the SMMUv3 driver to use the KVM VMID for stage-2 configuration in nested mode.
Signed-off-by: Shameer Kolothum
---
 arch/arm64/include/asm/kvm_host.h |  3 ++
 arch/arm64/kvm/vmid.c             | 84 ++++++++++++++++++++++++++++++-
 2 files changed, 86 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 21c57b812569..20fb00d29f48 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -141,6 +141,7 @@ int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages);
 
 struct kvm_vmid {
 	atomic64_t id;
+	refcount_t pinned;
 };
 
 struct kvm_s2_mmu {
@@ -1097,6 +1098,8 @@ int __init kvm_arm_vmid_alloc_init(void);
 void __init kvm_arm_vmid_alloc_free(void);
 bool kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid);
 void kvm_arm_vmid_clear_active(void);
+unsigned long kvm_arm_pinned_vmid_get(struct kvm_vmid *kvm_vmid);
+void kvm_arm_pinned_vmid_put(struct kvm_vmid *kvm_vmid);
 
 static inline void kvm_arm_pvtime_vcpu_init(struct kvm_vcpu_arch *vcpu_arch)
 {
diff --git a/arch/arm64/kvm/vmid.c b/arch/arm64/kvm/vmid.c
index 806223b7022a..0ffe24683071 100644
--- a/arch/arm64/kvm/vmid.c
+++ b/arch/arm64/kvm/vmid.c
@@ -25,6 +25,10 @@ static unsigned long *vmid_map;
 static DEFINE_PER_CPU(atomic64_t, active_vmids);
 static DEFINE_PER_CPU(u64, reserved_vmids);
 
+static unsigned long max_pinned_vmids;
+static unsigned long nr_pinned_vmids;
+static unsigned long *pinned_vmid_map;
+
 #define VMID_MASK		(~GENMASK(kvm_arm_vmid_bits - 1, 0))
 #define VMID_FIRST_VERSION	(1UL << kvm_arm_vmid_bits)
 
@@ -47,7 +51,10 @@ static void flush_context(void)
 	int cpu;
 	u64 vmid;
 
-	bitmap_zero(vmid_map, NUM_USER_VMIDS);
+	if (pinned_vmid_map)
+		bitmap_copy(vmid_map, pinned_vmid_map, NUM_USER_VMIDS);
+	else
+		bitmap_zero(vmid_map, NUM_USER_VMIDS);
 
 	for_each_possible_cpu(cpu) {
 		vmid = atomic64_xchg_relaxed(&per_cpu(active_vmids, cpu), 0);
@@ -103,6 +110,14 @@ static u64 new_vmid(struct kvm_vmid *kvm_vmid)
 		return newvmid;
 	}
 
+	/*
+	 * If it is pinned, we can keep using it. Note that reserved
+	 * takes priority, because even if it is also pinned, we need to
+	 * update the generation in reserved_vmids.
+	 */
+	if (refcount_read(&kvm_vmid->pinned))
+		return newvmid;
+
 	if (!__test_and_set_bit(vmid2idx(vmid), vmid_map)) {
 		atomic64_set(&kvm_vmid->id, newvmid);
 		return newvmid;
@@ -174,6 +189,63 @@ bool kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
 	return updated;
 }
 
+unsigned long kvm_arm_pinned_vmid_get(struct kvm_vmid *kvm_vmid)
+{
+	unsigned long flags;
+	u64 vmid;
+
+	if (!pinned_vmid_map)
+		return 0;
+
+	raw_spin_lock_irqsave(&cpu_vmid_lock, flags);
+
+	vmid = atomic64_read(&kvm_vmid->id);
+
+	if (refcount_inc_not_zero(&kvm_vmid->pinned))
+		goto out_unlock;
+
+	if (nr_pinned_vmids >= max_pinned_vmids) {
+		vmid = 0;
+		goto out_unlock;
+	}
+
+	/*
+	 * If we went through one or more rollovers since that VMID was
+	 * last used, make sure it is still valid, or generate a new one.
+	 */
+	if (!vmid_gen_match(vmid))
+		vmid = new_vmid(kvm_vmid);
+
+	nr_pinned_vmids++;
+	__set_bit(vmid2idx(vmid), pinned_vmid_map);
+	refcount_set(&kvm_vmid->pinned, 1);
+
+out_unlock:
+	raw_spin_unlock_irqrestore(&cpu_vmid_lock, flags);
+
+	vmid &= ~VMID_MASK;
+
+	return vmid;
+}
+
+void kvm_arm_pinned_vmid_put(struct kvm_vmid *kvm_vmid)
+{
+	unsigned long flags;
+	u64 vmid = atomic64_read(&kvm_vmid->id);
+
+	if (!pinned_vmid_map)
+		return;
+
+	raw_spin_lock_irqsave(&cpu_vmid_lock, flags);
+
+	if (refcount_dec_and_test(&kvm_vmid->pinned)) {
+		__clear_bit(vmid2idx(vmid), pinned_vmid_map);
+		nr_pinned_vmids--;
+	}
+
+	raw_spin_unlock_irqrestore(&cpu_vmid_lock, flags);
+}
+
 /*
  * Initialize the VMID allocator
  */
@@ -191,10 +263,20 @@ int __init kvm_arm_vmid_alloc_init(void)
 	if (!vmid_map)
 		return -ENOMEM;
 
+	pinned_vmid_map = bitmap_zalloc(NUM_USER_VMIDS, GFP_KERNEL);
+	nr_pinned_vmids = 0;
+
+	/*
+	 * Ensure we have at least one empty slot available after rollover
+	 * when the maximum number of VMIDs are pinned. VMID#0 is reserved.
+	 */
+	max_pinned_vmids = NUM_USER_VMIDS - num_possible_cpus() - 2;
+
 	return 0;
 }
 
 void __init kvm_arm_vmid_alloc_free(void)
 {
+	bitmap_free(pinned_vmid_map);
 	bitmap_free(vmid_map);
 }