From patchwork Fri Oct 27 18:22:04 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13438949
Reply-To: Sean Christopherson
Date: Fri, 27 Oct 2023 11:22:04 -0700
In-Reply-To: <20231027182217.3615211-1-seanjc@google.com>
Mime-Version: 1.0
References: <20231027182217.3615211-1-seanjc@google.com>
X-Mailer: git-send-email 2.42.0.820.g83a721a137-goog
Message-ID: <20231027182217.3615211-23-seanjc@google.com>
Subject: [PATCH v13 22/35] KVM: Allow arch code to track number of memslot address spaces per VM
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Huacai Chen, Michael Ellerman,
 Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
 Alexander Viro, Christian Brauner, "Matthew Wilcox (Oracle)", Andrew Morton
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-mips@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Xiaoyao Li, Xu Yilun,
 Chao Peng, Fuad Tabba, Jarkko Sakkinen, Anish Moorthy, David Matlack,
 Yu Zhang, Isaku Yamahata, Mickaël Salaün, Vlastimil Babka,
 Vishal Annapurve, Ackerley Tng, Maciej Szmigiero, David Hildenbrand,
 Quentin Perret, Michael Roth, Wang, Liam Merwick, Isaku Yamahata,
 "Kirill A . Shutemov"
Let x86 track the number of address spaces on a per-VM basis so that KVM
can disallow SMM memslots for confidential VMs.  Confidential VMs are
fundamentally incompatible with emulating SMM, which as the name suggests
requires being able to read and write guest memory and register state.

Disallowing SMM will simplify support for guest private memory, as KVM
will not need to worry about tracking memory attributes for multiple
address spaces (SMM is the only "non-default" address space across all
architectures).
Signed-off-by: Sean Christopherson
Reviewed-by: Paolo Bonzini
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
 arch/powerpc/kvm/book3s_hv.c    |  2 +-
 arch/x86/include/asm/kvm_host.h |  8 +++++++-
 arch/x86/kvm/debugfs.c          |  2 +-
 arch/x86/kvm/mmu/mmu.c          |  6 +++---
 arch/x86/kvm/x86.c              |  2 +-
 include/linux/kvm_host.h        | 17 +++++++++++------
 virt/kvm/dirty_ring.c           |  2 +-
 virt/kvm/kvm_main.c             | 26 ++++++++++++++------------
 8 files changed, 39 insertions(+), 26 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 130bafdb1430..9b0eaa17275a 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -6084,7 +6084,7 @@ static int kvmhv_svm_off(struct kvm *kvm)
 	}
 
 	srcu_idx = srcu_read_lock(&kvm->srcu);
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		struct kvm_memory_slot *memslot;
 		struct kvm_memslots *slots = __kvm_memslots(kvm, i);
 		int bkt;
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6702f795c862..f9e8d5642069 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2124,9 +2124,15 @@ enum {
 #define HF_SMM_MASK		(1 << 1)
 #define HF_SMM_INSIDE_NMI_MASK	(1 << 2)
 
-# define KVM_ADDRESS_SPACE_NUM 2
+# define KVM_MAX_NR_ADDRESS_SPACES 2
 # define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ? 1 : 0)
 # define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).smm)
+
+static inline int kvm_arch_nr_memslot_as_ids(struct kvm *kvm)
+{
+	return KVM_MAX_NR_ADDRESS_SPACES;
+}
+
 #else
 # define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, 0)
 #endif
diff --git a/arch/x86/kvm/debugfs.c b/arch/x86/kvm/debugfs.c
index ee8c4c3496ed..42026b3f3ff3 100644
--- a/arch/x86/kvm/debugfs.c
+++ b/arch/x86/kvm/debugfs.c
@@ -111,7 +111,7 @@ static int kvm_mmu_rmaps_stat_show(struct seq_file *m, void *v)
 	mutex_lock(&kvm->slots_lock);
 	write_lock(&kvm->mmu_lock);
 
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		int bkt;
 
 		slots = __kvm_memslots(kvm, i);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c4e758f0aebb..baeba8fc1c38 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3755,7 +3755,7 @@ static int mmu_first_shadow_root_alloc(struct kvm *kvm)
 	    kvm_page_track_write_tracking_enabled(kvm))
 		goto out_success;
 
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		slots = __kvm_memslots(kvm, i);
 		kvm_for_each_memslot(slot, bkt, slots) {
 			/*
@@ -6294,7 +6294,7 @@ static bool kvm_rmap_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_e
 	if (!kvm_memslots_have_rmaps(kvm))
 		return flush;
 
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		slots = __kvm_memslots(kvm, i);
 
 		kvm_for_each_memslot_in_gfn_range(&iter, slots, gfn_start, gfn_end) {
@@ -6791,7 +6791,7 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
 	 * modifier prior to checking for a wrap of the MMIO generation so
 	 * that a wrap in any address space is detected.
 	 */
-	gen &= ~((u64)KVM_ADDRESS_SPACE_NUM - 1);
+	gen &= ~((u64)kvm_arch_nr_memslot_as_ids(kvm) - 1);
 
 	/*
 	 * The very rare case: if the MMIO generation number has wrapped,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 824b58b44382..c4d17727b199 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12456,7 +12456,7 @@ void __user * __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,
 		hva = slot->userspace_addr;
 	}
 
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		struct kvm_userspace_memory_region2 m;
 
 		m.slot = id | (i << 16);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c3cfe08b1300..687589ce9f63 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -80,8 +80,8 @@
 /* Two fragments for cross MMIO pages. */
 #define KVM_MAX_MMIO_FRAGMENTS	2
 
-#ifndef KVM_ADDRESS_SPACE_NUM
-#define KVM_ADDRESS_SPACE_NUM	1
+#ifndef KVM_MAX_NR_ADDRESS_SPACES
+#define KVM_MAX_NR_ADDRESS_SPACES	1
 #endif
 
 /*
@@ -692,7 +692,12 @@ bool kvm_arch_irqchip_in_kernel(struct kvm *kvm);
 #define KVM_MEM_SLOTS_NUM SHRT_MAX
 #define KVM_USER_MEM_SLOTS (KVM_MEM_SLOTS_NUM - KVM_INTERNAL_MEM_SLOTS)
 
-#if KVM_ADDRESS_SPACE_NUM == 1
+#if KVM_MAX_NR_ADDRESS_SPACES == 1
+static inline int kvm_arch_nr_memslot_as_ids(struct kvm *kvm)
+{
+	return KVM_MAX_NR_ADDRESS_SPACES;
+}
+
 static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
 {
 	return 0;
@@ -747,9 +752,9 @@ struct kvm {
 	struct mm_struct *mm; /* userspace tied to this vm */
 	unsigned long nr_memslot_pages;
 	/* The two memslot sets - active and inactive (per address space) */
-	struct kvm_memslots __memslots[KVM_ADDRESS_SPACE_NUM][2];
+	struct kvm_memslots __memslots[KVM_MAX_NR_ADDRESS_SPACES][2];
 	/* The current active memslot set for each address space */
-	struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];
+	struct kvm_memslots __rcu *memslots[KVM_MAX_NR_ADDRESS_SPACES];
 	struct xarray vcpu_array;
 	/*
 	 * Protected by slots_lock, but can be read outside if an
@@ -1018,7 +1023,7 @@ void kvm_put_kvm_no_destroy(struct kvm *kvm);
 
 static inline struct kvm_memslots *__kvm_memslots(struct kvm *kvm, int as_id)
 {
-	as_id = array_index_nospec(as_id, KVM_ADDRESS_SPACE_NUM);
+	as_id = array_index_nospec(as_id, KVM_MAX_NR_ADDRESS_SPACES);
 	return srcu_dereference_check(kvm->memslots[as_id], &kvm->srcu,
 			lockdep_is_held(&kvm->slots_lock) ||
 			!refcount_read(&kvm->users_count));
diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
index c1cd7dfe4a90..86d267db87bb 100644
--- a/virt/kvm/dirty_ring.c
+++ b/virt/kvm/dirty_ring.c
@@ -58,7 +58,7 @@ static void kvm_reset_dirty_gfn(struct kvm *kvm, u32 slot, u64 offset, u64 mask)
 	as_id = slot >> 16;
 	id = (u16)slot;
 
-	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_USER_MEM_SLOTS)
+	if (as_id >= kvm_arch_nr_memslot_as_ids(kvm) || id >= KVM_USER_MEM_SLOTS)
 		return;
 
 	memslot = id_to_memslot(__kvm_memslots(kvm, as_id), id);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 5d1a2f1b4e94..23633984142f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -615,7 +615,7 @@ static __always_inline kvm_mn_ret_t __kvm_handle_hva_range(struct kvm *kvm,
 
 	idx = srcu_read_lock(&kvm->srcu);
 
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		struct interval_tree_node *node;
 
 		slots = __kvm_memslots(kvm, i);
@@ -1248,7 +1248,7 @@ static struct kvm *kvm_create_vm(unsigned long type, const char *fdname)
 		goto out_err_no_irq_srcu;
 
 	refcount_set(&kvm->users_count, 1);
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		for (j = 0; j < 2; j++) {
 			slots = &kvm->__memslots[i][j];
 
@@ -1398,7 +1398,7 @@ static void kvm_destroy_vm(struct kvm *kvm)
 #endif
 	kvm_arch_destroy_vm(kvm);
 	kvm_destroy_devices(kvm);
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		kvm_free_memslots(kvm, &kvm->__memslots[i][0]);
 		kvm_free_memslots(kvm, &kvm->__memslots[i][1]);
 	}
@@ -1681,7 +1681,7 @@ static void kvm_swap_active_memslots(struct kvm *kvm, int as_id)
 	 * space 0 will use generations 0, 2, 4, ... while address space 1 will
 	 * use generations 1, 3, 5, ...
 	 */
-	gen += KVM_ADDRESS_SPACE_NUM;
+	gen += kvm_arch_nr_memslot_as_ids(kvm);
 
 	kvm_arch_memslots_updated(kvm, gen);
@@ -2051,7 +2051,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	    (mem->guest_memfd_offset & (PAGE_SIZE - 1) ||
 	     mem->guest_memfd_offset + mem->memory_size < mem->guest_memfd_offset))
 		return -EINVAL;
-	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_MEM_SLOTS_NUM)
+	if (as_id >= kvm_arch_nr_memslot_as_ids(kvm) || id >= KVM_MEM_SLOTS_NUM)
 		return -EINVAL;
 	if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
 		return -EINVAL;
@@ -2187,7 +2187,7 @@ int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log,
 	as_id = log->slot >> 16;
 	id = (u16)log->slot;
 
-	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_USER_MEM_SLOTS)
+	if (as_id >= kvm_arch_nr_memslot_as_ids(kvm) || id >= KVM_USER_MEM_SLOTS)
 		return -EINVAL;
 
 	slots = __kvm_memslots(kvm, as_id);
@@ -2249,7 +2249,7 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
 	as_id = log->slot >> 16;
 	id = (u16)log->slot;
 
-	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_USER_MEM_SLOTS)
+	if (as_id >= kvm_arch_nr_memslot_as_ids(kvm) || id >= KVM_USER_MEM_SLOTS)
 		return -EINVAL;
 
 	slots = __kvm_memslots(kvm, as_id);
@@ -2361,7 +2361,7 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm,
 	as_id = log->slot >> 16;
 	id = (u16)log->slot;
 
-	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_USER_MEM_SLOTS)
+	if (as_id >= kvm_arch_nr_memslot_as_ids(kvm) || id >= KVM_USER_MEM_SLOTS)
 		return -EINVAL;
 
 	if (log->first_page & 63)
@@ -2502,7 +2502,7 @@ static __always_inline void kvm_handle_gfn_range(struct kvm *kvm,
 	gfn_range.only_private = false;
 	gfn_range.only_shared = false;
 
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		slots = __kvm_memslots(kvm, i);
 
 		kvm_for_each_memslot_in_gfn_range(&iter, slots, range->start, range->end) {
@@ -4857,9 +4857,11 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 	case KVM_CAP_IRQ_ROUTING:
 		return KVM_MAX_IRQ_ROUTES;
 #endif
-#if KVM_ADDRESS_SPACE_NUM > 1
+#if KVM_MAX_NR_ADDRESS_SPACES > 1
 	case KVM_CAP_MULTI_ADDRESS_SPACE:
-		return KVM_ADDRESS_SPACE_NUM;
+		if (kvm)
+			return kvm_arch_nr_memslot_as_ids(kvm);
+		return KVM_MAX_NR_ADDRESS_SPACES;
 #endif
 	case KVM_CAP_NR_MEMSLOTS:
 		return KVM_USER_MEM_SLOTS;
@@ -4967,7 +4969,7 @@ bool kvm_are_all_memslots_empty(struct kvm *kvm)
 
 	lockdep_assert_held(&kvm->slots_lock);
 
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		if (!kvm_memslots_empty(__kvm_memslots(kvm, i)))
 			return false;
 	}