From patchwork Tue Jul 18 23:44:52 2023
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13317878
Reply-To: Sean Christopherson
Date: Tue, 18 Jul 2023 16:44:52 -0700
In-Reply-To: <20230718234512.1690985-1-seanjc@google.com>
Mime-Version: 1.0
References: <20230718234512.1690985-1-seanjc@google.com>
X-Mailer: git-send-email 2.41.0.255.g8b1d071c50-goog
Message-ID: <20230718234512.1690985-10-seanjc@google.com>
Subject: [RFC PATCH v11 09/29] KVM: x86: Disallow hugepages when memory attributes are mixed
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Huacai Chen, Michael Ellerman,
 Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
 "Matthew Wilcox (Oracle)", Andrew Morton, Paul Moore, James Morris,
 "Serge E. Hallyn"
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-mips@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-security-module@vger.kernel.org,
 linux-kernel@vger.kernel.org, Chao Peng, Fuad Tabba, Jarkko Sakkinen,
 Yu Zhang, Vishal Annapurve, Ackerley Tng, Maciej Szmigiero,
 Vlastimil Babka, David Hildenbrand, Quentin Perret, Michael Roth, Wang,
 Liam Merwick, Isaku Yamahata, "Kirill A . Shutemov"
From: Chao Peng

Disallow creating hugepages with mixed memory attributes, e.g. shared
versus private, as mapping a hugepage in this case would allow the guest
to access memory with the wrong attributes, e.g. overlaying private memory
with a shared hugepage.

Track whether or not attributes are mixed via the existing disallow_lpage
field, but use the most significant bit in 'disallow_lpage' to indicate a
hugepage has mixed attributes instead of using the normal refcounting.
Whether or not attributes are mixed is binary; either they are or they
aren't.  Attempting to squeeze that info into the refcount is unnecessarily
complex as it would require knowing the previous state of the mixed count
when updating attributes.  Using a flag means KVM just needs to ensure the
current status is reflected in the memslots.
Signed-off-by: Chao Peng
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |   3 +
 arch/x86/kvm/mmu/mmu.c          | 185 +++++++++++++++++++++++++++++++-
 arch/x86/kvm/x86.c              |   4 +
 3 files changed, 190 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f9a927296d85..b87ff7b601fa 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1816,6 +1816,9 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu);
 int kvm_mmu_init_vm(struct kvm *kvm);
 void kvm_mmu_uninit_vm(struct kvm *kvm);
 
+void kvm_mmu_init_memslot_memory_attributes(struct kvm *kvm,
+					    struct kvm_memory_slot *slot);
+
 void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu);
 void kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
 void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b034727c4cf9..aefe67185637 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -803,16 +803,27 @@ static struct kvm_lpage_info *lpage_info_slot(gfn_t gfn,
 	return &slot->arch.lpage_info[level - 2][idx];
 }
 
+/*
+ * The most significant bit in disallow_lpage tracks whether or not memory
+ * attributes are mixed, i.e. not identical for all gfns at the current level.
+ * The lower order bits are used to refcount other cases where a hugepage is
+ * disallowed, e.g. if KVM has shadowed a page table at the gfn.
+ */
+#define KVM_LPAGE_MIXED_FLAG	BIT(31)
+
 static void update_gfn_disallow_lpage_count(const struct kvm_memory_slot *slot,
 					    gfn_t gfn, int count)
 {
 	struct kvm_lpage_info *linfo;
-	int i;
+	int old, i;
 
 	for (i = PG_LEVEL_2M; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
 		linfo = lpage_info_slot(gfn, slot, i);
+
+		old = linfo->disallow_lpage;
 		linfo->disallow_lpage += count;
-		WARN_ON(linfo->disallow_lpage < 0);
+
+		WARN_ON_ONCE((old ^ linfo->disallow_lpage) & KVM_LPAGE_MIXED_FLAG);
 	}
 }
 
@@ -7223,3 +7234,173 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
 	if (kvm->arch.nx_huge_page_recovery_thread)
 		kthread_stop(kvm->arch.nx_huge_page_recovery_thread);
 }
+
+#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
+static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
+				int level)
+{
+	return lpage_info_slot(gfn, slot, level)->disallow_lpage & KVM_LPAGE_MIXED_FLAG;
+}
+
+static void hugepage_clear_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
+				 int level)
+{
+	lpage_info_slot(gfn, slot, level)->disallow_lpage &= ~KVM_LPAGE_MIXED_FLAG;
+}
+
+static void hugepage_set_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
+			       int level)
+{
+	lpage_info_slot(gfn, slot, level)->disallow_lpage |= KVM_LPAGE_MIXED_FLAG;
+}
+
+static bool range_has_attrs(struct kvm *kvm, gfn_t start, gfn_t end,
+			    unsigned long attrs)
+{
+	XA_STATE(xas, &kvm->mem_attr_array, start);
+	unsigned long index;
+	bool has_attrs;
+	void *entry;
+
+	rcu_read_lock();
+
+	if (!attrs) {
+		has_attrs = !xas_find(&xas, end);
+		goto out;
+	}
+
+	has_attrs = true;
+	for (index = start; index < end; index++) {
+		do {
+			entry = xas_next(&xas);
+		} while (xas_retry(&xas, entry));
+
+		if (xas.xa_index != index || xa_to_value(entry) != attrs) {
+			has_attrs = false;
+			break;
+		}
+	}
+
+out:
+	rcu_read_unlock();
+	return has_attrs;
+}
+
+static bool hugepage_has_attrs(struct kvm *kvm, struct kvm_memory_slot *slot,
+			       gfn_t gfn, int level, unsigned long attrs)
+{
+	const unsigned long start = gfn;
+	const unsigned long end = start + KVM_PAGES_PER_HPAGE(level);
+
+	if (level == PG_LEVEL_2M)
+		return range_has_attrs(kvm, start, end, attrs);
+
+	for (gfn = start; gfn < end; gfn += KVM_PAGES_PER_HPAGE(level - 1)) {
+		if (hugepage_test_mixed(slot, gfn, level - 1) ||
+		    attrs != kvm_get_memory_attributes(kvm, gfn))
+			return false;
+	}
+	return true;
+}
+
+bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
+					 struct kvm_gfn_range *range)
+{
+	unsigned long attrs = range->arg.attributes;
+	struct kvm_memory_slot *slot = range->slot;
+	int level;
+
+	lockdep_assert_held_write(&kvm->mmu_lock);
+	lockdep_assert_held(&kvm->slots_lock);
+
+	/*
+	 * KVM x86 currently only supports KVM_MEMORY_ATTRIBUTE_PRIVATE, skip
+	 * the slot if the slot will never consume the PRIVATE attribute.
+	 */
+	if (!kvm_slot_can_be_private(slot))
+		return false;
+
+	/*
+	 * The sequence matters here: upper levels consume the result of lower
+	 * level's scanning.
+	 */
+	for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
+		gfn_t nr_pages = KVM_PAGES_PER_HPAGE(level);
+		gfn_t gfn = gfn_round_for_level(range->start, level);
+
+		/* Process the head page if it straddles the range. */
+		if (gfn != range->start || gfn + nr_pages > range->end) {
+			/*
+			 * Skip mixed tracking if the aligned gfn isn't covered
+			 * by the memslot, KVM can't use a hugepage due to the
+			 * misaligned address regardless of memory attributes.
+			 */
+			if (gfn >= slot->base_gfn) {
+				if (hugepage_has_attrs(kvm, slot, gfn, level, attrs))
+					hugepage_clear_mixed(slot, gfn, level);
+				else
+					hugepage_set_mixed(slot, gfn, level);
+			}
+			gfn += nr_pages;
+		}
+
+		/*
+		 * Pages entirely covered by the range are guaranteed to have
+		 * only the attributes which were just set.
+		 */
+		for ( ; gfn + nr_pages <= range->end; gfn += nr_pages)
+			hugepage_clear_mixed(slot, gfn, level);
+
+		/*
+		 * Process the last tail page if it straddles the range and is
+		 * contained by the memslot.  Like the head page, KVM can't
+		 * create a hugepage if the slot size is misaligned.
+		 */
+		if (gfn < range->end &&
+		    (gfn + nr_pages) <= (slot->base_gfn + slot->npages)) {
+			if (hugepage_has_attrs(kvm, slot, gfn, level, attrs))
+				hugepage_clear_mixed(slot, gfn, level);
+			else
+				hugepage_set_mixed(slot, gfn, level);
+		}
+	}
+	return false;
+}
+
+void kvm_mmu_init_memslot_memory_attributes(struct kvm *kvm,
+					    struct kvm_memory_slot *slot)
+{
+	int level;
+
+	if (!kvm_slot_can_be_private(slot))
+		return;
+
+	for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
+		/*
+		 * Don't bother tracking mixed attributes for pages that can't
+		 * be huge due to alignment, i.e. process only pages that are
+		 * entirely contained by the memslot.
+		 */
+		gfn_t end = gfn_round_for_level(slot->base_gfn + slot->npages, level);
+		gfn_t start = gfn_round_for_level(slot->base_gfn, level);
+		gfn_t nr_pages = KVM_PAGES_PER_HPAGE(level);
+		gfn_t gfn;
+
+		if (start < slot->base_gfn)
+			start += nr_pages;
+
+		/*
+		 * Unlike setting attributes, every potential hugepage needs to
+		 * be manually checked as the attributes may already be mixed.
+		 */
+		for (gfn = start; gfn < end; gfn += nr_pages) {
+			unsigned long attrs = kvm_get_memory_attributes(kvm, gfn);
+
+			if (hugepage_has_attrs(kvm, slot, gfn, level, attrs))
+				hugepage_clear_mixed(slot, gfn, level);
+			else
+				hugepage_set_mixed(slot, gfn, level);
+		}
+	}
+}
+#endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 92e77afd3ffd..dd7cefe78815 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12570,6 +12570,10 @@ static int kvm_alloc_memslot_metadata(struct kvm *kvm,
 		}
 	}
 
+#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
+	kvm_mmu_init_memslot_memory_attributes(kvm, slot);
+#endif
+
 	if (kvm_page_track_create_memslot(kvm, slot, npages))
 		goto out_free;