From patchwork Fri Oct 27 18:22:01 2023
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13438946
Reply-To: Sean Christopherson
Date: Fri, 27 Oct 2023 11:22:01 -0700
In-Reply-To: <20231027182217.3615211-1-seanjc@google.com>
References: <20231027182217.3615211-1-seanjc@google.com>
X-Mailer: git-send-email 2.42.0.820.g83a721a137-goog
Message-ID: <20231027182217.3615211-20-seanjc@google.com>
Subject: [PATCH v13 19/35] KVM: x86: Disallow hugepages when memory
 attributes are mixed
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Huacai Chen,
 Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Sean Christopherson, Alexander Viro, Christian Brauner,
 "Matthew Wilcox (Oracle)", Andrew Morton
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-mips@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Xiaoyao Li, Xu Yilun,
 Chao Peng, Fuad Tabba, Jarkko Sakkinen, Anish Moorthy, David Matlack,
 Yu Zhang, Isaku Yamahata, Mickaël Salaün, Vlastimil Babka,
 Vishal Annapurve, Ackerley Tng, Maciej Szmigiero, David Hildenbrand,
 Quentin Perret, Michael Roth, Wang, Liam Merwick, Isaku Yamahata,
 "Kirill A. Shutemov"

From: Chao Peng

Disallow creating hugepages with mixed memory attributes, e.g. shared
versus private, as mapping a hugepage in this case would allow the guest
to access memory with the wrong attributes, e.g. overlaying private
memory with a shared hugepage.

Track whether or not attributes are mixed via the existing
disallow_lpage field, but use the most significant bit in
'disallow_lpage' to indicate a hugepage has mixed attributes instead of
using the normal refcounting.  Whether or not attributes are mixed is
binary; either they are or they aren't.  Attempting to squeeze that
info into the refcount is unnecessarily complex as it would require
knowing the previous state of the mixed count when updating attributes.
Using a flag means KVM just needs to ensure the current status is
reflected in the memslots.
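
The flag-versus-refcount split is easiest to see in isolation.  Below is
a minimal userspace model of the encoding (not part of the patch): a
32-bit counter whose top bit is reserved as a pure flag, mirroring how
the mixed-attributes bit shares 'disallow_lpage' with the refcount.  The
names LPAGE_MIXED_FLAG, adjust_refcount() and hugepage_disallowed() are
hypothetical stand-ins for the kernel's KVM_LPAGE_MIXED_FLAG,
update_gfn_disallow_lpage_count() and the existing disallow_lpage
checks.

/*
 * Userspace model only: bit 31 is a pure flag for "mixed attributes",
 * bits 30:0 remain an ordinary refcount for the other reasons a
 * hugepage can be disallowed.
 */
#include <stdint.h>
#include <stdio.h>

#define LPAGE_MIXED_FLAG	(1u << 31)

static uint32_t disallow_lpage;

static void adjust_refcount(int count)
{
	/* Refcount updates must never disturb the flag bit. */
	uint32_t old = disallow_lpage;

	disallow_lpage += count;
	if ((old ^ disallow_lpage) & LPAGE_MIXED_FLAG)
		fprintf(stderr, "refcount bled into the flag bit!\n");
}

static int hugepage_disallowed(void)
{
	/* Non-zero means mixed attributes and/or a non-zero refcount. */
	return disallow_lpage != 0;
}

int main(void)
{
	adjust_refcount(+1);			/* e.g. a shadow page exists */
	disallow_lpage |= LPAGE_MIXED_FLAG;	/* attributes became mixed  */
	adjust_refcount(-1);			/* the shadow page is gone  */
	printf("disallowed = %d\n", hugepage_disallowed()); /* prints 1 */
	return 0;
}

Because any non-zero value disallows the hugepage, readers of
disallow_lpage need no changes; only writers must avoid carrying the
refcount into bit 31, which is what the new WARN_ON_ONCE below guards
against.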
Signed-off-by: Chao Peng
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |   3 +
 arch/x86/kvm/mmu/mmu.c          | 154 +++++++++++++++++++++++++++++++-
 arch/x86/kvm/x86.c              |   4 +
 3 files changed, 159 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 31e84668014e..8d60e4745e8b 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1836,6 +1836,9 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu);
 void kvm_mmu_init_vm(struct kvm *kvm);
 void kvm_mmu_uninit_vm(struct kvm *kvm);
 
+void kvm_mmu_init_memslot_memory_attributes(struct kvm *kvm,
+					    struct kvm_memory_slot *slot);
+
 void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu);
 void kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
 void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d33657d61d80..4167d557c577 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -795,16 +795,26 @@ static struct kvm_lpage_info *lpage_info_slot(gfn_t gfn,
 	return &slot->arch.lpage_info[level - 2][idx];
 }
 
+/*
+ * The most significant bit in disallow_lpage tracks whether or not memory
+ * attributes are mixed, i.e. not identical for all gfns at the current level.
+ * The lower order bits are used to refcount other cases where a hugepage is
+ * disallowed, e.g. if KVM has a shadow page table at the gfn.
+ */
+#define KVM_LPAGE_MIXED_FLAG	BIT(31)
+
 static void update_gfn_disallow_lpage_count(const struct kvm_memory_slot *slot,
 					    gfn_t gfn, int count)
 {
 	struct kvm_lpage_info *linfo;
-	int i;
+	int old, i;
 
 	for (i = PG_LEVEL_2M; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
 		linfo = lpage_info_slot(gfn, slot, i);
+
+		old = linfo->disallow_lpage;
 		linfo->disallow_lpage += count;
-		WARN_ON_ONCE(linfo->disallow_lpage < 0);
+		WARN_ON_ONCE((old ^ linfo->disallow_lpage) & KVM_LPAGE_MIXED_FLAG);
 	}
 }
 
@@ -7161,3 +7171,143 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
 	if (kvm->arch.nx_huge_page_recovery_thread)
 		kthread_stop(kvm->arch.nx_huge_page_recovery_thread);
 }
+
+#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
+static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
+				int level)
+{
+	return lpage_info_slot(gfn, slot, level)->disallow_lpage & KVM_LPAGE_MIXED_FLAG;
+}
+
+static void hugepage_clear_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
+				 int level)
+{
+	lpage_info_slot(gfn, slot, level)->disallow_lpage &= ~KVM_LPAGE_MIXED_FLAG;
+}
+
+static void hugepage_set_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
+			       int level)
+{
+	lpage_info_slot(gfn, slot, level)->disallow_lpage |= KVM_LPAGE_MIXED_FLAG;
+}
+
+static bool hugepage_has_attrs(struct kvm *kvm, struct kvm_memory_slot *slot,
+			       gfn_t gfn, int level, unsigned long attrs)
+{
+	const unsigned long start = gfn;
+	const unsigned long end = start + KVM_PAGES_PER_HPAGE(level);
+
+	if (level == PG_LEVEL_2M)
+		return kvm_range_has_memory_attributes(kvm, start, end, attrs);
+
+	for (gfn = start; gfn < end; gfn += KVM_PAGES_PER_HPAGE(level - 1)) {
+		if (hugepage_test_mixed(slot, gfn, level - 1) ||
+		    attrs != kvm_get_memory_attributes(kvm, gfn))
+			return false;
+	}
+	return true;
+}
+
+bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
+					 struct kvm_gfn_range *range)
+{
+	unsigned long attrs = range->arg.attributes;
+	struct kvm_memory_slot *slot = range->slot;
+	int level;
+
+	lockdep_assert_held_write(&kvm->mmu_lock);
+	lockdep_assert_held(&kvm->slots_lock);
+
+	/*
+	 * Calculate which ranges can be mapped with hugepages even if the slot
+	 * can't map memory PRIVATE.  KVM mustn't create a SHARED hugepage over
+	 * a range that has PRIVATE GFNs, and conversely converting a range to
+	 * SHARED may now allow hugepages.
+	 */
+	if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
+		return false;
+
+	/*
+	 * The sequence matters here: upper levels consume the result of lower
+	 * level's scanning.
+	 */
+	for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
+		gfn_t nr_pages = KVM_PAGES_PER_HPAGE(level);
+		gfn_t gfn = gfn_round_for_level(range->start, level);
+
+		/* Process the head page if it straddles the range. */
+		if (gfn != range->start || gfn + nr_pages > range->end) {
+			/*
+			 * Skip mixed tracking if the aligned gfn isn't covered
+			 * by the memslot; KVM can't use a hugepage due to the
+			 * misaligned address regardless of memory attributes.
+			 */
+			if (gfn >= slot->base_gfn) {
+				if (hugepage_has_attrs(kvm, slot, gfn, level, attrs))
+					hugepage_clear_mixed(slot, gfn, level);
+				else
+					hugepage_set_mixed(slot, gfn, level);
+			}
+			gfn += nr_pages;
+		}
+
+		/*
+		 * Pages entirely covered by the range are guaranteed to have
+		 * only the attributes which were just set.
+		 */
+		for ( ; gfn + nr_pages <= range->end; gfn += nr_pages)
+			hugepage_clear_mixed(slot, gfn, level);
+
+		/*
+		 * Process the last tail page if it straddles the range and is
+		 * contained by the memslot.  Like the head page, KVM can't
+		 * create a hugepage if the slot size is misaligned.
+		 */
+		if (gfn < range->end &&
+		    (gfn + nr_pages) <= (slot->base_gfn + slot->npages)) {
+			if (hugepage_has_attrs(kvm, slot, gfn, level, attrs))
+				hugepage_clear_mixed(slot, gfn, level);
+			else
+				hugepage_set_mixed(slot, gfn, level);
+		}
+	}
+	return false;
+}
+
+void kvm_mmu_init_memslot_memory_attributes(struct kvm *kvm,
+					    struct kvm_memory_slot *slot)
+{
+	int level;
+
+	if (!kvm_arch_has_private_mem(kvm))
+		return;
+
+	for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
+		/*
+		 * Don't bother tracking mixed attributes for pages that can't
+		 * be huge due to alignment, i.e. process only pages that are
+		 * entirely contained by the memslot.
+		 */
+		gfn_t end = gfn_round_for_level(slot->base_gfn + slot->npages, level);
+		gfn_t start = gfn_round_for_level(slot->base_gfn, level);
+		gfn_t nr_pages = KVM_PAGES_PER_HPAGE(level);
+		gfn_t gfn;
+
+		if (start < slot->base_gfn)
+			start += nr_pages;
+
+		/*
+		 * Unlike setting attributes, every potential hugepage needs to
+		 * be manually checked as the attributes may already be mixed.
+		 */
+		for (gfn = start; gfn < end; gfn += nr_pages) {
+			unsigned long attrs = kvm_get_memory_attributes(kvm, gfn);
+
+			if (hugepage_has_attrs(kvm, slot, gfn, level, attrs))
+				hugepage_clear_mixed(slot, gfn, level);
+			else
+				hugepage_set_mixed(slot, gfn, level);
+		}
+	}
+}
+#endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f41dbb1465a0..824b58b44382 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12607,6 +12607,10 @@ static int kvm_alloc_memslot_metadata(struct kvm *kvm,
 		}
 	}
 
+#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
+	kvm_mmu_init_memslot_memory_attributes(kvm, slot);
+#endif
+
 	if (kvm_page_track_create_memslot(kvm, slot, npages))
 		goto out_free;
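
To see why only the head and tail blocks need rescanning in
kvm_arch_post_set_memory_attributes(), here is a standalone sketch of
the same walk (not from the kernel tree; round_down_gfn() and the
walk() driver are hypothetical stand-ins for gfn_round_for_level() and
the per-level loop body).  It classifies each hugepage-sized block
touched by a range update:

/*
 * Standalone model of the head/body/tail range walk.  nr_pages is the
 * number of base pages per hugepage and must be a power of two.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t gfn_t;

static gfn_t round_down_gfn(gfn_t gfn, gfn_t nr_pages)
{
	return gfn & ~(nr_pages - 1);
}

static void walk(gfn_t start, gfn_t end, gfn_t nr_pages)
{
	gfn_t gfn = round_down_gfn(start, nr_pages);

	/* Head block: straddles the range, must be rechecked per page. */
	if (gfn != start || gfn + nr_pages > end) {
		printf("head  %#llx (straddles)\n", (unsigned long long)gfn);
		gfn += nr_pages;
	}

	/* Body blocks: fully covered, attributes are uniform by construction. */
	for ( ; gfn + nr_pages <= end; gfn += nr_pages)
		printf("body  %#llx (uniform)\n", (unsigned long long)gfn);

	/* Tail block: straddles the range end, must also be rechecked. */
	if (gfn < end)
		printf("tail  %#llx (straddles)\n", (unsigned long long)gfn);
}

int main(void)
{
	/* 2MiB blocks of 4KiB pages: 512 base pages per block. */
	walk(0x203, 0x810, 512);
	return 0;
}

Blocks fully inside [start, end) can only hold the just-written
attributes, so the patch unconditionally marks them non-mixed; the
straddling head and tail blocks also cover GFNs outside the range and
therefore must be rechecked against their neighbors' attributes.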