From patchwork Thu Sep 14 01:55:18 2023
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13384027
Reply-To: Sean Christopherson
Date: Wed, 13 Sep 2023 18:55:18 -0700
In-Reply-To: <20230914015531.1419405-1-seanjc@google.com>
Mime-Version: 1.0
References: <20230914015531.1419405-1-seanjc@google.com>
X-Mailer: git-send-email 2.42.0.283.g2d96d420d3-goog
Message-ID: <20230914015531.1419405-21-seanjc@google.com>
Subject: [RFC PATCH v12 20/33] KVM: Allow arch code to track number of
 memslot address spaces per VM
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Huacai Chen,
 Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Sean Christopherson, "Matthew Wilcox (Oracle)", Andrew Morton,
 Paul Moore, James Morris, "Serge E. Hallyn"
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-mips@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-security-module@vger.kernel.org,
 linux-kernel@vger.kernel.org, Chao Peng, Fuad Tabba, Jarkko Sakkinen,
 Anish Moorthy, Yu Zhang, Isaku Yamahata, Xu Yilun, Vlastimil Babka,
 Vishal Annapurve, Ackerley Tng, Maciej Szmigiero, David Hildenbrand,
 Quentin Perret, Michael Roth, Wang, Liam Merwick, Isaku Yamahata,
 "Kirill A. Shutemov"
Shutemov" X-Rspamd-Queue-Id: DE81080005 X-Rspam-User: X-Rspamd-Server: rspam11 X-Stat-Signature: zzjr9i9axyy659injcnsu57tjw44783o X-HE-Tag: 1694656576-982781 X-HE-Meta: U2FsdGVkX190+50P5BuS+LCLBCIYGpL2qSfH3g0sCornqkKGoGgCU+0Mnl1ymhZ+I/wB0R+d3w9TBUaBqWFXzznaPe4TIGYxluA+jscVXt07cm86GbnfbflEpWFIPKSjb8hqGF6N79ST0hFZaylfwXO9tkJW2AtnlWYJkHRTBDZwRy0F9+AjZKw5qoI9cLatToI5eOc4fMA42IhLHD5aLIWja1hB5wzslJz45VM0K/qj7xnfcFuAsyz68twh09cA+vXYXe2nlZ6k8pdCSloHImpSWn9J3Tsxy0/CrygLlAaEdjc+tHll+kEFF5r6s2MBYMBayyBdD28+WHxhfjN5K739JyxK74m0wTsovRNOpw+qyc1Lfr24PdmU+3FqjgVKVotKPOn5f+tMCnJFerCTksR15/U80AH2FQlV7lR0jhkp75xgAjjPhvxKUDbzPzVe74QHtUHh+PwTv8hQv/Lw6rqC3MPixso2YACCquJbY9jxest5aOqDUl2/EgsgZ/V8p+xYHQYDm35RaPkDQwbX0bFKY2GvVN+ukAQjgmFQA/LmVAwlRtaYQeemD5WkFKPIajQ4vucMGX4CQl4FOo3x53sLSCN1v+8uEHWtGZqcVfUjor4jILU2NpSAB6dEkfLXlO9UrQJDuRV7X8NgbIYagGAXKWyPryrqL99vyJZenEc/jRFVAkFDjsy1lpKO0aiDu1lTQKODVolpPicDr/lqnc55SO6uc4iYbl3Cb4qhk0vb69+tvHijOfDYkC/NI1slkY4TcK02DlZTHnTfx2HAtgQ6BCsxZP+fz3tgQdS5CUNakwD40OUVKJzQUIHXl0RIkn4XoHOlgemuNZlciq/7had2w8PC85bK1HLlBRlGCMIGxeO9fUZd6ONiL0PFBTKe3e3yDdk81gyY7Ohk9yJ/NXZwHqKM768D/hAEWmWdS8ZkGl0ChQIdhPe6qIxKjztMEq1MCssWLcRp7Ysew9o fojYVHHe ezE2CeB9oEq3VuCGgLoSgmoGMfD9cN11dmyUonqpJD1cixQEiSDLzyz1o1TGydFfgjRlGjHmKMfL8mOQeDoJTODsXxoQFz+V0nHCxKoUpeEdG4vromlxT29MtyTl0VpevO4bW8hz5BjN1LO1eDOFQjZfA3F4HdE7HtVqBErIuj595f/oxNvl4cioHh55rTFFgNjzkcyMYFYeUcNyeEiuBJ4lorclohvINzCcZzvHW6hNbJKGl2pnceiNeSmQbPbBd4tpvhRWyaLTHHI7xj2NrUk0j+yLXpAH9ey8FO/SerNr78JbBK6J/kQVAEwUHXSNyFM0V1sVbuqnm074OWeyTY8eshGD4nSykATfociXUm3vwplmlsZzK9+qYhe1KQNMGL27XCt3D9AX6QTlhrze13HdD4ES0df8OvKyKl6bs5ciXx1TVAj4rQT64uPNl0xsoMjUAoqKWr55kh82Zedq/uUFyOYeXaxoCg6LFngjOfjjzhMFqIQmIYjC7DI2ZiaNAMKEB9v5EpEsUvXYIGUJ5uWnuslHhZM8v4mXfg0uDNp7EFhwoWToUM6CXRpKeUQ3RmOnKI1oqbwe35vIMTRJwpZ697ZW2Fpjm/71fAEO/LxM0cEgxgXsXJJXzvHYjVNvmTtZMiibRMKv0IRlLci9w+5NHqaaPnnWMYChfaF3hFj+eyeEI8KlF+FNMFolaJyEUClzAVAGtx3WBAj8UvtrfjCgYaWoE6XXx92QZJvPUF+ooKkxl01RFCfE7//msZ+m6a/3QunM5FYHy7h8= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Let x86 track the number of address spaces on a per-VM basis so that KVM can disallow SMM memslots for confidential VMs. Confidentials VMs are fundamentally incompatible with emulating SMM, which as the name suggests requires being able to read and write guest memory and register state. Disallowing SMM will simplify support for guest private memory, as KVM will not need to worry about tracking memory attributes for multiple address spaces (SMM is the only "non-default" address space across all architectures). 
Signed-off-by: Sean Christopherson
---
 arch/powerpc/kvm/book3s_hv.c    |  2 +-
 arch/x86/include/asm/kvm_host.h |  8 +++++++-
 arch/x86/kvm/debugfs.c          |  2 +-
 arch/x86/kvm/mmu/mmu.c          |  8 ++++----
 arch/x86/kvm/mmu/tdp_mmu.c      |  2 +-
 arch/x86/kvm/x86.c              |  2 +-
 include/linux/kvm_host.h        | 17 +++++++++++------
 virt/kvm/dirty_ring.c           |  2 +-
 virt/kvm/kvm_main.c             | 26 ++++++++++++++------------
 9 files changed, 41 insertions(+), 28 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 130bafdb1430..9b0eaa17275a 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -6084,7 +6084,7 @@ static int kvmhv_svm_off(struct kvm *kvm)
 	}
 
 	srcu_idx = srcu_read_lock(&kvm->srcu);
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		struct kvm_memory_slot *memslot;
 		struct kvm_memslots *slots = __kvm_memslots(kvm, i);
 		int bkt;
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 78d641056ec5..44d67a97304e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2126,9 +2126,15 @@ enum {
 #define HF_SMM_MASK		(1 << 1)
 #define HF_SMM_INSIDE_NMI_MASK	(1 << 2)
 
-# define KVM_ADDRESS_SPACE_NUM 2
+# define KVM_MAX_NR_ADDRESS_SPACES 2
 # define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ? 1 : 0)
 # define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).smm)
+
+static inline int kvm_arch_nr_memslot_as_ids(struct kvm *kvm)
+{
+	return KVM_MAX_NR_ADDRESS_SPACES;
+}
+
 #else
 # define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, 0)
 #endif
diff --git a/arch/x86/kvm/debugfs.c b/arch/x86/kvm/debugfs.c
index ee8c4c3496ed..42026b3f3ff3 100644
--- a/arch/x86/kvm/debugfs.c
+++ b/arch/x86/kvm/debugfs.c
@@ -111,7 +111,7 @@ static int kvm_mmu_rmaps_stat_show(struct seq_file *m, void *v)
 	mutex_lock(&kvm->slots_lock);
 	write_lock(&kvm->mmu_lock);
 
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		int bkt;
 
 		slots = __kvm_memslots(kvm, i);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 9b48d8d0300b..269d4dc47c98 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3755,7 +3755,7 @@ static int mmu_first_shadow_root_alloc(struct kvm *kvm)
 	    kvm_page_track_write_tracking_enabled(kvm))
 		goto out_success;
 
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		slots = __kvm_memslots(kvm, i);
 		kvm_for_each_memslot(slot, bkt, slots) {
 			/*
@@ -6301,7 +6301,7 @@ static bool kvm_rmap_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_e
 	if (!kvm_memslots_have_rmaps(kvm))
 		return flush;
 
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		slots = __kvm_memslots(kvm, i);
 
 		kvm_for_each_memslot_in_gfn_range(&iter, slots, gfn_start, gfn_end) {
@@ -6341,7 +6341,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 	flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end);
 
 	if (tdp_mmu_enabled) {
-		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
+		for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++)
 			flush = kvm_tdp_mmu_zap_leafs(kvm, i, gfn_start,
 						      gfn_end, true, flush);
 	}
@@ -6802,7 +6802,7 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
 	 * modifier prior to checking for a wrap of the MMIO generation so
 	 * that a wrap in any address space is detected.
 	 */
-	gen &= ~((u64)KVM_ADDRESS_SPACE_NUM - 1);
+	gen &= ~((u64)kvm_arch_nr_memslot_as_ids(kvm) - 1);
 
 	/*
 	 * The very rare case: if the MMIO generation number has wrapped,
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 6c63f2d1675f..ca7ec39f17d3 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -905,7 +905,7 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
 	 * is being destroyed or the userspace VMM has exited.  In both cases,
 	 * KVM_RUN is unreachable, i.e. no vCPUs will ever service the request.
 	 */
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		for_each_tdp_mmu_root_yield_safe(kvm, root, i)
 			tdp_mmu_zap_root(kvm, root, false);
 	}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ac36a5b7b5a3..f1da61236670 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12447,7 +12447,7 @@ void __user * __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,
 		hva = slot->userspace_addr;
 	}
 
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		struct kvm_userspace_memory_region2 m;
 
 		m.slot = id | (i << 16);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index aea1b4306129..8c5c017ab4e9 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -80,8 +80,8 @@
 /* Two fragments for cross MMIO pages. */
 #define KVM_MAX_MMIO_FRAGMENTS	2
 
-#ifndef KVM_ADDRESS_SPACE_NUM
-#define KVM_ADDRESS_SPACE_NUM	1
+#ifndef KVM_MAX_NR_ADDRESS_SPACES
+#define KVM_MAX_NR_ADDRESS_SPACES	1
 #endif
 
 /*
@@ -692,7 +692,12 @@ bool kvm_arch_irqchip_in_kernel(struct kvm *kvm);
 #define KVM_MEM_SLOTS_NUM SHRT_MAX
 #define KVM_USER_MEM_SLOTS (KVM_MEM_SLOTS_NUM - KVM_INTERNAL_MEM_SLOTS)
 
-#if KVM_ADDRESS_SPACE_NUM == 1
+#if KVM_MAX_NR_ADDRESS_SPACES == 1
+static inline int kvm_arch_nr_memslot_as_ids(struct kvm *kvm)
+{
+	return KVM_MAX_NR_ADDRESS_SPACES;
+}
+
 static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
 {
 	return 0;
@@ -747,9 +752,9 @@ struct kvm {
 	struct mm_struct *mm; /* userspace tied to this vm */
 	unsigned long nr_memslot_pages;
 	/* The two memslot sets - active and inactive (per address space) */
-	struct kvm_memslots __memslots[KVM_ADDRESS_SPACE_NUM][2];
+	struct kvm_memslots __memslots[KVM_MAX_NR_ADDRESS_SPACES][2];
 	/* The current active memslot set for each address space */
-	struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];
+	struct kvm_memslots __rcu *memslots[KVM_MAX_NR_ADDRESS_SPACES];
 	struct xarray vcpu_array;
 	/*
 	 * Protected by slots_lock, but can be read outside if an
@@ -1018,7 +1023,7 @@ void kvm_put_kvm_no_destroy(struct kvm *kvm);
 
 static inline struct kvm_memslots *__kvm_memslots(struct kvm *kvm, int as_id)
 {
-	as_id = array_index_nospec(as_id, KVM_ADDRESS_SPACE_NUM);
+	as_id = array_index_nospec(as_id, KVM_MAX_NR_ADDRESS_SPACES);
 	return srcu_dereference_check(kvm->memslots[as_id], &kvm->srcu,
 			lockdep_is_held(&kvm->slots_lock) ||
 			!refcount_read(&kvm->users_count));
diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
index c1cd7dfe4a90..86d267db87bb 100644
--- a/virt/kvm/dirty_ring.c
+++ b/virt/kvm/dirty_ring.c
@@ -58,7 +58,7 @@ static void kvm_reset_dirty_gfn(struct kvm *kvm, u32 slot, u64 offset, u64 mask)
 	as_id = slot >> 16;
 	id = (u16)slot;
 
-	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_USER_MEM_SLOTS)
+	if (as_id >= kvm_arch_nr_memslot_as_ids(kvm) || id >= KVM_USER_MEM_SLOTS)
 		return;
 
 	memslot = id_to_memslot(__kvm_memslots(kvm, as_id), id);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 68a6119e09e4..a83dfef1316e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -615,7 +615,7 @@ static __always_inline kvm_mn_ret_t __kvm_handle_hva_range(struct kvm *kvm,
 
 	idx = srcu_read_lock(&kvm->srcu);
 
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		struct interval_tree_node *node;
 
 		slots = __kvm_memslots(kvm, i);
@@ -1248,7 +1248,7 @@ static struct kvm *kvm_create_vm(unsigned long type, const char *fdname)
 		goto out_err_no_irq_srcu;
 
 	refcount_set(&kvm->users_count, 1);
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		for (j = 0; j < 2; j++) {
 			slots = &kvm->__memslots[i][j];
 
@@ -1391,7 +1391,7 @@ static void kvm_destroy_vm(struct kvm *kvm)
 #endif
 	kvm_arch_destroy_vm(kvm);
 	kvm_destroy_devices(kvm);
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		kvm_free_memslots(kvm, &kvm->__memslots[i][0]);
 		kvm_free_memslots(kvm, &kvm->__memslots[i][1]);
 	}
@@ -1674,7 +1674,7 @@ static void kvm_swap_active_memslots(struct kvm *kvm, int as_id)
 	 * space 0 will use generations 0, 2, 4, ... while address space 1 will
 	 * use generations 1, 3, 5, ...
 	 */
-	gen += KVM_ADDRESS_SPACE_NUM;
+	gen += kvm_arch_nr_memslot_as_ids(kvm);
 
 	kvm_arch_memslots_updated(kvm, gen);
 
@@ -2044,7 +2044,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	    (mem->gmem_offset & (PAGE_SIZE - 1) ||
 	     mem->gmem_offset + mem->memory_size < mem->gmem_offset))
 		return -EINVAL;
-	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_MEM_SLOTS_NUM)
+	if (as_id >= kvm_arch_nr_memslot_as_ids(kvm) || id >= KVM_MEM_SLOTS_NUM)
 		return -EINVAL;
 	if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
 		return -EINVAL;
@@ -2180,7 +2180,7 @@ int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log,
 
 	as_id = log->slot >> 16;
 	id = (u16)log->slot;
-	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_USER_MEM_SLOTS)
+	if (as_id >= kvm_arch_nr_memslot_as_ids(kvm) || id >= KVM_USER_MEM_SLOTS)
 		return -EINVAL;
 
 	slots = __kvm_memslots(kvm, as_id);
@@ -2242,7 +2242,7 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
 
 	as_id = log->slot >> 16;
 	id = (u16)log->slot;
-	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_USER_MEM_SLOTS)
+	if (as_id >= kvm_arch_nr_memslot_as_ids(kvm) || id >= KVM_USER_MEM_SLOTS)
 		return -EINVAL;
 
 	slots = __kvm_memslots(kvm, as_id);
@@ -2354,7 +2354,7 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm,
 
 	as_id = log->slot >> 16;
 	id = (u16)log->slot;
-	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_USER_MEM_SLOTS)
+	if (as_id >= kvm_arch_nr_memslot_as_ids(kvm) || id >= KVM_USER_MEM_SLOTS)
 		return -EINVAL;
 
 	if (log->first_page & 63)
@@ -2494,7 +2494,7 @@ static __always_inline void kvm_handle_gfn_range(struct kvm *kvm,
 	gfn_range.only_private = false;
 	gfn_range.only_shared = false;
 
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		slots = __kvm_memslots(kvm, i);
 
 		kvm_for_each_memslot_in_gfn_range(&iter, slots, range->start, range->end) {
@@ -4833,9 +4833,11 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 	case KVM_CAP_IRQ_ROUTING:
 		return KVM_MAX_IRQ_ROUTES;
 #endif
-#if KVM_ADDRESS_SPACE_NUM > 1
+#if KVM_MAX_NR_ADDRESS_SPACES > 1
 	case KVM_CAP_MULTI_ADDRESS_SPACE:
-		return KVM_ADDRESS_SPACE_NUM;
+		if (kvm)
+			return kvm_arch_nr_memslot_as_ids(kvm);
+		return KVM_MAX_NR_ADDRESS_SPACES;
 #endif
 	case KVM_CAP_NR_MEMSLOTS:
 		return KVM_USER_MEM_SLOTS;
@@ -4939,7 +4941,7 @@ bool kvm_are_all_memslots_empty(struct kvm *kvm)
 
 	lockdep_assert_held(&kvm->slots_lock);
 
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
 		if (!kvm_memslots_empty(__kvm_memslots(kvm, i)))
 			return false;
 	}
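
As a usage sketch (assumptions: a host exposing /dev/kvm and a kernel
with KVM_CAP_CHECK_EXTENSION_VM support), userspace can compare the
system-wide maximum against the per-VM count that the
KVM_CAP_MULTI_ADDRESS_SPACE change above makes possible:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	int main(void)
	{
		int kvm_fd = open("/dev/kvm", O_RDWR);
		int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);

		/* System-wide maximum number of memslot address spaces. */
		int max = ioctl(kvm_fd, KVM_CHECK_EXTENSION,
				KVM_CAP_MULTI_ADDRESS_SPACE);
		/* Per-VM count; may be smaller once an arch restricts it. */
		int vm = ioctl(vm_fd, KVM_CHECK_EXTENSION,
			       KVM_CAP_MULTI_ADDRESS_SPACE);

		printf("max address spaces: %d, this VM: %d\n", max, vm);
		return 0;
	}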