From patchwork Fri Jul 22 01:50:27 2022
Date: Thu, 21 Jul 2022 18:50:27 -0700
In-Reply-To: <20220722015034.809663-1-pcc@google.com>
Message-Id: <20220722015034.809663-2-pcc@google.com>
References: <20220722015034.809663-1-pcc@google.com>
Subject: [PATCH v2 1/7] arm64: mte: Fix/clarify the PG_mte_tagged semantics
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price, Vincenzo Frascino, Peter Collingbourne

From: Catalin Marinas

Currently the PG_mte_tagged page flag mostly means the page contains valid tags and it should be set after the tags have been cleared or restored. However, in mte_sync_tags() it is set before setting the tags to avoid, in theory, a race with concurrent mprotect(PROT_MTE) for shared pages. However, a concurrent mprotect(PROT_MTE) with a copy on write in another thread can cause the new page to have stale tags. Similarly, tag reading via ptrace() can read stale tags if the PG_mte_tagged flag is set before actually clearing/restoring the tags.

Fix the PG_mte_tagged semantics so that it is only set after the tags have been cleared or restored. This is safe for swap restoring into a MAP_SHARED or CoW page since the core code takes the page lock. Add two functions to test and set the PG_mte_tagged flag with acquire and release semantics. The downside is that concurrent mprotect(PROT_MTE) on a MAP_SHARED page may cause tag loss. This is already the case for KVM guests if a VMM changes the page protection while the guest triggers a user_mem_abort().
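The acquire/release pairing this patch introduces can be illustrated outside the kernel. The sketch below is a userspace analogue using C11 atomics; `struct fake_page` and its fields are invented for illustration only (the kernel versions pair smp_wmb()/smp_rmb() with set_bit()/test_bit()):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <string.h>

/* Userspace stand-in for struct page: tag storage plus a PG_mte_tagged bit. */
struct fake_page {
        char tags[16];
        atomic_bool mte_tagged;
};

/* Release: all prior writes to p->tags become visible before the flag does. */
static void set_page_mte_tagged(struct fake_page *p)
{
        atomic_store_explicit(&p->mte_tagged, true, memory_order_release);
}

/* Acquire: if the flag is observed set, the tag writes are visible too. */
static bool page_mte_tagged(struct fake_page *p)
{
        return atomic_load_explicit(&p->mte_tagged, memory_order_acquire);
}

/* A writer initialises the tags first, then publishes the flag. */
static void restore_tags(struct fake_page *p, const char *src)
{
        memcpy(p->tags, src, sizeof(p->tags));
        set_page_mte_tagged(p); /* flag set only after tags are valid */
}
```

A reader that sees page_mte_tagged() return true may then read the tags without observing pre-restore contents; setting the flag before writing the tags (as the old mte_sync_tags() ordering allowed) is exactly the stale-tag window this patch closes.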
Signed-off-by: Catalin Marinas
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Steven Price
Cc: Peter Collingbourne
Reported-by: kernel test robot
---
 arch/arm64/include/asm/mte.h     | 30 ++++++++++++++++++++++++++++++
 arch/arm64/include/asm/pgtable.h |  2 +-
 arch/arm64/kernel/cpufeature.c   |  4 +++-
 arch/arm64/kernel/elfcore.c      |  2 +-
 arch/arm64/kernel/hibernate.c    |  2 +-
 arch/arm64/kernel/mte.c          | 12 +++++++-----
 arch/arm64/kvm/guest.c           |  4 ++--
 arch/arm64/kvm/mmu.c             |  4 ++--
 arch/arm64/mm/copypage.c         |  4 ++--
 arch/arm64/mm/fault.c            |  2 +-
 arch/arm64/mm/mteswap.c          |  2 +-
 11 files changed, 51 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index aa523591a44e..c69218c56980 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -37,6 +37,29 @@ void mte_free_tag_storage(char *storage);
 /* track which pages have valid allocation tags */
 #define PG_mte_tagged	PG_arch_2
 
+static inline void set_page_mte_tagged(struct page *page)
+{
+	/*
+	 * Ensure that the tags written prior to this function are visible
+	 * before the page flags update.
+	 */
+	smp_wmb();
+	set_bit(PG_mte_tagged, &page->flags);
+}
+
+static inline bool page_mte_tagged(struct page *page)
+{
+	bool ret = test_bit(PG_mte_tagged, &page->flags);
+
+	/*
+	 * If the page is tagged, ensure ordering with a likely subsequent
+	 * read of the tags.
+	 */
+	if (ret)
+		smp_rmb();
+	return ret;
+}
+
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t old_pte, pte_t pte);
 void mte_copy_page_tags(void *kto, const void *kfrom);
@@ -54,6 +77,13 @@ size_t mte_probe_user_range(const char __user *uaddr, size_t size);
 /* unused if !CONFIG_ARM64_MTE, silence the compiler */
 #define PG_mte_tagged	0
 
+static inline void set_page_mte_tagged(struct page *page)
+{
+}
+static inline bool page_mte_tagged(struct page *page)
+{
+	return false;
+}
 static inline void mte_zero_clear_page_tags(void *addr)
 {
 }
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b5df82aa99e6..82719fa42c0e 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1050,7 +1050,7 @@ static inline void arch_swap_invalidate_area(int type)
 static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
 {
 	if (system_supports_mte() && mte_restore_tags(entry, &folio->page))
-		set_bit(PG_mte_tagged, &folio->flags);
+		set_page_mte_tagged(&folio->page);
 }
 
 #endif /* CONFIG_ARM64_MTE */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index fae4c7a785d8..c66f0ffaaf47 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2020,8 +2020,10 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
 	 * Clear the tags in the zero page. This needs to be done via the
 	 * linear map which has the Tagged attribute.
 	 */
-	if (!test_and_set_bit(PG_mte_tagged, &ZERO_PAGE(0)->flags))
+	if (!page_mte_tagged(ZERO_PAGE(0))) {
 		mte_clear_page_tags(lm_alias(empty_zero_page));
+		set_page_mte_tagged(ZERO_PAGE(0));
+	}
 
 	kasan_init_hw_tags_cpu();
 }
diff --git a/arch/arm64/kernel/elfcore.c b/arch/arm64/kernel/elfcore.c
index 27ef7ad3ffd2..353009d7f307 100644
--- a/arch/arm64/kernel/elfcore.c
+++ b/arch/arm64/kernel/elfcore.c
@@ -47,7 +47,7 @@ static int mte_dump_tag_range(struct coredump_params *cprm,
 		 * Pages mapped in user space as !pte_access_permitted() (e.g.
 		 * PROT_EXEC only) may not have the PG_mte_tagged flag set.
 		 */
-		if (!test_bit(PG_mte_tagged, &page->flags)) {
+		if (!page_mte_tagged(page)) {
 			put_page(page);
 			dump_skip(cprm, MTE_PAGE_TAG_STORAGE);
 			continue;
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index af5df48ba915..788597a6b6a2 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -271,7 +271,7 @@ static int swsusp_mte_save_tags(void)
 			if (!page)
 				continue;
 
-			if (!test_bit(PG_mte_tagged, &page->flags))
+			if (!page_mte_tagged(page))
 				continue;
 
 			ret = save_tags(page, pfn);
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index b2b730233274..2287316639f3 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -41,14 +41,17 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
 	if (check_swap && is_swap_pte(old_pte)) {
 		swp_entry_t entry = pte_to_swp_entry(old_pte);
 
-		if (!non_swap_entry(entry) && mte_restore_tags(entry, page))
+		if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) {
+			set_page_mte_tagged(page);
 			return;
+		}
 	}
 
 	if (!pte_is_tagged)
 		return;
 
 	mte_clear_page_tags(page_address(page));
+	set_page_mte_tagged(page);
 }
 
 void mte_sync_tags(pte_t old_pte, pte_t pte)
@@ -64,7 +67,7 @@ void mte_sync_tags(pte_t old_pte, pte_t pte)
 
 	/* if PG_mte_tagged is set, tags have already been initialised */
 	for (i = 0; i < nr_pages; i++, page++) {
-		if (!test_and_set_bit(PG_mte_tagged, &page->flags))
+		if (!page_mte_tagged(page))
 			mte_sync_page_tags(page, old_pte, check_swap,
 					   pte_is_tagged);
 	}
@@ -91,8 +94,7 @@ int memcmp_pages(struct page *page1, struct page *page2)
 	 * pages is tagged, set_pte_at() may zero or change the tags of the
 	 * other page via mte_sync_tags().
 	 */
-	if (test_bit(PG_mte_tagged, &page1->flags) ||
-	    test_bit(PG_mte_tagged, &page2->flags))
+	if (page_mte_tagged(page1) || page_mte_tagged(page2))
 		return addr1 != addr2;
 
 	return ret;
@@ -398,7 +400,7 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
 			put_page(page);
 			break;
 		}
-		WARN_ON_ONCE(!test_bit(PG_mte_tagged, &page->flags));
+		WARN_ON_ONCE(!page_mte_tagged(page));
 
 		/* limit access to the end of the page */
 		offset = offset_in_page(addr);
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 8c607199cad1..3b04e69006b4 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -1058,7 +1058,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 		maddr = page_address(page);
 
 		if (!write) {
-			if (test_bit(PG_mte_tagged, &page->flags))
+			if (page_mte_tagged(page))
 				num_tags = mte_copy_tags_to_user(tags, maddr,
 							MTE_GRANULES_PER_PAGE);
 			else
@@ -1075,7 +1075,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 			 * completed fully
 			 */
 			if (num_tags == MTE_GRANULES_PER_PAGE)
-				set_bit(PG_mte_tagged, &page->flags);
+				set_page_mte_tagged(page);
 
 			kvm_release_pfn_dirty(pfn);
 		}
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 87f1cd0df36e..c9012707f69c 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1075,9 +1075,9 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 		return -EFAULT;
 
 	for (i = 0; i < nr_pages; i++, page++) {
-		if (!test_bit(PG_mte_tagged, &page->flags)) {
+		if (!page_mte_tagged(page)) {
 			mte_clear_page_tags(page_address(page));
-			set_bit(PG_mte_tagged, &page->flags);
+			set_page_mte_tagged(page);
 		}
 	}
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 24913271e898..4223389b6180 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -21,9 +21,9 @@ void copy_highpage(struct page *to, struct page *from)
 
 	copy_page(kto, kfrom);
 
-	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
-		set_bit(PG_mte_tagged, &to->flags);
+	if (system_supports_mte() && page_mte_tagged(from)) {
 		mte_copy_page_tags(kto, kfrom);
+		set_page_mte_tagged(to);
 	}
 }
 EXPORT_SYMBOL(copy_highpage);
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index c33f1fad2745..d095bfa16771 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -931,5 +931,5 @@ struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
 void tag_clear_highpage(struct page *page)
 {
 	mte_zero_clear_page_tags(page_address(page));
-	set_bit(PG_mte_tagged, &page->flags);
+	set_page_mte_tagged(page);
 }
diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
index 4334dec93bd4..a78c1db23c68 100644
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -24,7 +24,7 @@ int mte_save_tags(struct page *page)
 {
 	void *tag_storage, *ret;
 
-	if (!test_bit(PG_mte_tagged, &page->flags))
+	if (!page_mte_tagged(page))
 		return 0;
 
 	tag_storage = mte_allocate_tag_storage();

From patchwork Fri Jul 22 01:50:28 2022
Date: Thu, 21 Jul 2022 18:50:28 -0700
In-Reply-To: <20220722015034.809663-1-pcc@google.com>
Message-Id: <20220722015034.809663-3-pcc@google.com>
References: <20220722015034.809663-1-pcc@google.com>
Subject: [PATCH v2 2/7] KVM: arm64: Simplify the sanitise_mte_tags() logic
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price, Vincenzo Frascino, Peter Collingbourne
From: Catalin Marinas

Currently sanitise_mte_tags() checks if it's an online page before attempting to sanitise the tags. Such detection should be done in the caller via the VM_MTE_ALLOWED vma flag. Since kvm_set_spte_gfn() does not have the vma, leave the page unmapped if not already tagged. Tag initialisation will be done on a subsequent access fault in user_mem_abort().

Signed-off-by: Catalin Marinas
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Steven Price
Cc: Peter Collingbourne
---
 arch/arm64/kvm/mmu.c | 40 +++++++++++++++-------------------------
 1 file changed, 15 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c9012707f69c..1a3707aeb41f 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1056,23 +1056,14 @@ static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
  * - mmap_lock protects between a VM faulting a page in and the VMM performing
  *   an mprotect() to add VM_MTE
  */
-static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
-			     unsigned long size)
+static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
+			      unsigned long size)
 {
 	unsigned long i, nr_pages = size >> PAGE_SHIFT;
-	struct page *page;
+	struct page *page = pfn_to_page(pfn);
 
 	if (!kvm_has_mte(kvm))
-		return 0;
-
-	/*
-	 * pfn_to_online_page() is used to reject ZONE_DEVICE pages
-	 * that may not support tags.
-	 */
-	page = pfn_to_online_page(pfn);
-
-	if (!page)
-		return -EFAULT;
+		return;
 
 	for (i = 0; i < nr_pages; i++, page++) {
 		if (!page_mte_tagged(page)) {
@@ -1080,8 +1071,6 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 			set_page_mte_tagged(page);
 		}
 	}
-
-	return 0;
 }
 
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
@@ -1092,7 +1081,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	bool write_fault, writable, force_pte = false;
 	bool exec_fault;
 	bool device = false;
-	bool shared;
 	unsigned long mmu_seq;
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
@@ -1142,8 +1130,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		vma_shift = get_vma_page_shift(vma, hva);
 	}
 
-	shared = (vma->vm_flags & VM_SHARED);
-
 	switch (vma_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED
 	case PUD_SHIFT:
@@ -1264,12 +1250,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
 		/* Check the VMM hasn't introduced a new VM_SHARED VMA */
-		if (!shared)
-			ret = sanitise_mte_tags(kvm, pfn, vma_pagesize);
-		else
+		if ((vma->vm_flags & VM_MTE_ALLOWED) &&
+		    !(vma->vm_flags & VM_SHARED)) {
+			sanitise_mte_tags(kvm, pfn, vma_pagesize);
+		} else {
 			ret = -EFAULT;
-		if (ret)
 			goto out_unlock;
+		}
 	}
 
 	if (writable)
@@ -1491,15 +1478,18 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	kvm_pfn_t pfn = pte_pfn(range->pte);
-	int ret;
 
 	if (!kvm->arch.mmu.pgt)
 		return false;
 
 	WARN_ON(range->end - range->start != 1);
 
-	ret = sanitise_mte_tags(kvm, pfn, PAGE_SIZE);
-	if (ret)
+	/*
+	 * If the page isn't tagged, defer to user_mem_abort() for sanitising
+	 * the MTE tags. The S2 pte should have been unmapped by
+	 * mmu_notifier_invalidate_range_end().
+	 */
+	if (kvm_has_mte(kvm) && !page_mte_tagged(pfn_to_page(pfn)))
 		return false;
 
 	/*

From patchwork Fri Jul 22 01:50:29 2022
Date: Thu, 21 Jul 2022 18:50:29 -0700
In-Reply-To: <20220722015034.809663-1-pcc@google.com>
Message-Id: <20220722015034.809663-4-pcc@google.com>
References: <20220722015034.809663-1-pcc@google.com>
Subject: [PATCH v2 3/7] mm: Add PG_arch_3 page flag
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Peter Collingbourne, Cornelia Huck, Catalin Marinas, Will Deacon, Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price, Vincenzo Frascino

As with PG_arch_2, this flag is only allowed on 64-bit architectures due to the shortage of bits available. It will be used by the arm64 MTE code in subsequent patches.
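The bit-shortage argument can be made concrete: page->flags is a single unsigned long that packs the flag bits together with node/zone (and sometimes section and last-cpupid) fields, so every new flag permanently consumes a bit that 32-bit targets do not have to spare. The sketch below uses illustrative field widths, not the kernel's exact layout:

```c
#include <limits.h>

/*
 * Illustration of why new page flags such as PG_arch_3 are 64-bit only.
 * All widths below are invented for the example; the real layout lives in
 * include/linux/page-flags-layout.h and varies by config.
 */
#define FLAGS_TOTAL      (sizeof(unsigned long) * CHAR_BIT)
#define NODE_BITS        10  /* illustrative: NUMA node id */
#define ZONE_BITS        3   /* illustrative: memory zone */
#define LAST_CPUPID_BITS 12  /* illustrative: extra packed field */
#define NR_CORE_FLAGS    24  /* illustrative: flag bits already allocated */

/* Bits still free for new arch flags on the compiling target. */
static inline int free_flag_bits(void)
{
        return (int)FLAGS_TOTAL - NODE_BITS - ZONE_BITS
               - LAST_CPUPID_BITS - NR_CORE_FLAGS;
}
```

With these example widths a 64-bit target has bits left over for PG_arch_2/PG_arch_3, while a 32-bit word is already oversubscribed, which is why the new flag sits under #ifdef CONFIG_64BIT.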
Signed-off-by: Peter Collingbourne
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Steven Price
[catalin.marinas@arm.com: added flag preserving in __split_huge_page_tail()]
Signed-off-by: Catalin Marinas
---
 fs/proc/page.c                 | 1 +
 include/linux/page-flags.h     | 1 +
 include/trace/events/mmflags.h | 7 ++++---
 mm/huge_memory.c               | 1 +
 4 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/fs/proc/page.c b/fs/proc/page.c
index a2873a617ae8..438b8aa7249d 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -220,6 +220,7 @@ u64 stable_page_flags(struct page *page)
 	u |= kpf_copy_bit(k, KPF_ARCH,		PG_arch_1);
 #ifdef CONFIG_64BIT
 	u |= kpf_copy_bit(k, KPF_ARCH_2,	PG_arch_2);
+	u |= kpf_copy_bit(k, KPF_ARCH_2,	PG_arch_3);
 #endif
 
 	return u;
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 465ff35a8c00..ad01a3abf6c8 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -134,6 +134,7 @@ enum pageflags {
 #endif
 #ifdef CONFIG_64BIT
 	PG_arch_2,
+	PG_arch_3,
 #endif
 #ifdef CONFIG_KASAN_HW_TAGS
 	PG_skip_kasan_poison,
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index 11524cda4a95..704380179986 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -91,9 +91,9 @@
 #endif
 
 #ifdef CONFIG_64BIT
-#define IF_HAVE_PG_ARCH_2(flag,string) ,{1UL << flag, string}
+#define IF_HAVE_PG_ARCH_2_3(flag,string) ,{1UL << flag, string}
 #else
-#define IF_HAVE_PG_ARCH_2(flag,string)
+#define IF_HAVE_PG_ARCH_2_3(flag,string)
 #endif
 
 #ifdef CONFIG_KASAN_HW_TAGS
@@ -129,7 +129,8 @@ IF_HAVE_PG_UNCACHED(PG_uncached,	"uncached"	) \
 IF_HAVE_PG_HWPOISON(PG_hwpoison,	"hwpoison"	) \
 IF_HAVE_PG_IDLE(PG_young,		"young"		) \
 IF_HAVE_PG_IDLE(PG_idle,		"idle"		) \
-IF_HAVE_PG_ARCH_2(PG_arch_2,		"arch_2"	) \
+IF_HAVE_PG_ARCH_2_3(PG_arch_2,		"arch_2"	) \
+IF_HAVE_PG_ARCH_2_3(PG_arch_3,		"arch_3"	) \
 IF_HAVE_PG_SKIP_KASAN_POISON(PG_skip_kasan_poison, "skip_kasan_poison")
 
 #define show_page_flags(flags) \
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8320874901f1..d6e8789e9ebb 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2399,6 +2399,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
 			 (1L << PG_unevictable) |
 #ifdef CONFIG_64BIT
 			 (1L << PG_arch_2) |
+			 (1L << PG_arch_3) |
 #endif
 			 (1L << PG_dirty)));

From patchwork Fri Jul 22 01:50:30 2022
Date: Thu, 21 Jul 2022 18:50:30 -0700
In-Reply-To: <20220722015034.809663-1-pcc@google.com>
Message-Id: <20220722015034.809663-5-pcc@google.com>
Subject: [PATCH v2 4/7] arm64: mte: Lock a page for MTE tag initialisation
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier,
    Evgenii Stepanov, kvm@vger.kernel.org, Steven Price,
    Vincenzo Frascino, Peter Collingbourne

From: Catalin Marinas

Initialising the tags and setting the PG_mte_tagged flag for a page can
race between multiple set_pte_at() calls on shared pages, or with setting
the stage 2 pte via user_mem_abort(). Introduce a new PG_mte_lock flag as
PG_arch_3 and set it before attempting page initialisation. Since
PG_mte_tagged is never cleared for a page, treat a set PG_mte_tagged as
also meaning the page is unlocked, and wait on this bit with acquire
semantics while the page is locked:

- try_page_mte_tagging() - lock the page for tagging; return true if it
  can be tagged, false if already tagged. No acquire semantics if it
  returns true (PG_mte_tagged not set) as there is no serialisation with
  a previous set_page_mte_tagged().

- set_page_mte_tagged() - set PG_mte_tagged with release semantics.

The two-bit locking is based on Peter Collingbourne's idea.
Signed-off-by: Catalin Marinas
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Steven Price
Cc: Peter Collingbourne
---
 arch/arm64/include/asm/mte.h     | 32 ++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/pgtable.h |  1 +
 arch/arm64/kernel/cpufeature.c   |  2 +-
 arch/arm64/kernel/mte.c          |  7 +++++--
 arch/arm64/kvm/guest.c           | 16 ++++++++++------
 arch/arm64/kvm/mmu.c             |  2 +-
 arch/arm64/mm/copypage.c         |  2 ++
 arch/arm64/mm/fault.c            |  2 ++
 arch/arm64/mm/mteswap.c          |  3 +++
 9 files changed, 57 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index c69218c56980..8e007046bba6 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -36,6 +36,8 @@ void mte_free_tag_storage(char *storage);

 /* track which pages have valid allocation tags */
 #define PG_mte_tagged	PG_arch_2
+/* simple lock to avoid multiple threads tagging the same page */
+#define PG_mte_lock	PG_arch_3

 static inline void set_page_mte_tagged(struct page *page)
 {
@@ -60,6 +62,32 @@ static inline bool page_mte_tagged(struct page *page)
 	return ret;
 }

+/*
+ * Lock the page for tagging and return 'true' if the page can be tagged,
+ * 'false' if already tagged. PG_mte_tagged is never cleared and therefore
+ * the locking only happens once for page initialisation.
+ *
+ * The page MTE lock state:
+ *
+ *   Locked:	PG_mte_lock && !PG_mte_tagged
+ *   Unlocked:	!PG_mte_lock || PG_mte_tagged
+ *
+ * Acquire semantics only if the page is tagged (returning 'false').
+ */
+static inline bool try_page_mte_tagging(struct page *page)
+{
+	if (!test_and_set_bit(PG_mte_lock, &page->flags))
+		return true;
+
+	/*
+	 * The tags are either being initialised or have already been
+	 * initialised; wait for the PG_mte_tagged flag to be set.
+	 */
+	smp_cond_load_acquire(&page->flags, VAL & (1UL << PG_mte_tagged));
+
+	return false;
+}
+
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t old_pte, pte_t pte);
 void mte_copy_page_tags(void *kto, const void *kfrom);
@@ -84,6 +112,10 @@ static inline bool page_mte_tagged(struct page *page)
 {
 	return false;
 }
+static inline bool try_page_mte_tagging(struct page *page)
+{
+	return false;
+}
 static inline void mte_zero_clear_page_tags(void *addr)
 {
 }
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 82719fa42c0e..e6b82ad1e9e6 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1049,6 +1049,7 @@ static inline void arch_swap_invalidate_area(int type)
 #define __HAVE_ARCH_SWAP_RESTORE
 static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
 {
+	/* mte_restore_tags() takes the PG_mte_lock */
 	if (system_supports_mte() && mte_restore_tags(entry, &folio->page))
 		set_page_mte_tagged(&folio->page);
 }
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c66f0ffaaf47..31787dafe95e 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2020,7 +2020,7 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
	 * Clear the tags in the zero page. This needs to be done via the
	 * linear map which has the Tagged attribute.
	 */
-	if (!page_mte_tagged(ZERO_PAGE(0))) {
+	if (try_page_mte_tagging(ZERO_PAGE(0))) {
 		mte_clear_page_tags(lm_alias(empty_zero_page));
 		set_page_mte_tagged(ZERO_PAGE(0));
 	}
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 2287316639f3..634e089b5933 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -41,6 +41,7 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
 	if (check_swap && is_swap_pte(old_pte)) {
 		swp_entry_t entry = pte_to_swp_entry(old_pte);

+		/* mte_restore_tags() takes the PG_mte_lock */
 		if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) {
 			set_page_mte_tagged(page);
 			return;
@@ -50,8 +51,10 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
 	if (!pte_is_tagged)
 		return;

-	mte_clear_page_tags(page_address(page));
-	set_page_mte_tagged(page);
+	if (try_page_mte_tagging(page)) {
+		mte_clear_page_tags(page_address(page));
+		set_page_mte_tagged(page);
+	}
 }

 void mte_sync_tags(pte_t old_pte, pte_t pte)
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 3b04e69006b4..059b38e7a9e8 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -1067,15 +1067,19 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 			clear_user(tags, MTE_GRANULES_PER_PAGE);
 			kvm_release_pfn_clean(pfn);
 		} else {
+			/*
+			 * Only locking to serialise with a concurrent
+			 * set_pte_at() in the VMM but still overriding the
+			 * tags, hence ignoring the return value.
+			 */
+			try_page_mte_tagging(page);
 			num_tags = mte_copy_tags_from_user(maddr, tags,
 							MTE_GRANULES_PER_PAGE);

-			/*
-			 * Set the flag after checking the write
-			 * completed fully
-			 */
-			if (num_tags == MTE_GRANULES_PER_PAGE)
-				set_page_mte_tagged(page);
+			/* uaccess failed, don't leave stale tags */
+			if (num_tags != MTE_GRANULES_PER_PAGE)
+				mte_clear_page_tags(page);
+			set_page_mte_tagged(page);

 			kvm_release_pfn_dirty(pfn);
 		}
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1a3707aeb41f..750a69a97994 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1066,7 +1066,7 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 		return;

 	for (i = 0; i < nr_pages; i++, page++) {
-		if (!page_mte_tagged(page)) {
+		if (try_page_mte_tagging(page)) {
 			mte_clear_page_tags(page_address(page));
 			set_page_mte_tagged(page);
 		}
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 4223389b6180..a3fa650ceca4 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -22,6 +22,8 @@ void copy_highpage(struct page *to, struct page *from)
 	copy_page(kto, kfrom);

 	if (system_supports_mte() && page_mte_tagged(from)) {
+		/* It's a new page, shouldn't have been tagged yet */
+		WARN_ON_ONCE(!try_page_mte_tagging(to));
 		mte_copy_page_tags(kto, kfrom);
 		set_page_mte_tagged(to);
 	}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index d095bfa16771..6407a29cab0d 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -930,6 +930,8 @@ struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,

 void tag_clear_highpage(struct page *page)
 {
+	/* Newly allocated page, shouldn't have been tagged yet */
+	WARN_ON_ONCE(!try_page_mte_tagging(page));
 	mte_zero_clear_page_tags(page_address(page));
 	set_page_mte_tagged(page);
 }
diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
index a78c1db23c68..cd5ad0936e16 100644
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -53,6 +53,9 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
 	if (!tags)
 		return false;

+	/* racing tag restoring? */
+	if (!try_page_mte_tagging(page))
+		return false;
 	mte_restore_page_tags(page_address(page), tags);

 	return true;

From patchwork Fri Jul 22 01:50:31 2022
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12925874
Date: Thu, 21 Jul 2022 18:50:31 -0700
In-Reply-To: <20220722015034.809663-1-pcc@google.com>
Message-Id: <20220722015034.809663-6-pcc@google.com>
Subject: [PATCH v2 5/7] KVM: arm64: unify the tests for VMAs in memslots when MTE is enabled
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Peter Collingbourne, Cornelia Huck, Catalin Marinas, Will Deacon,
    Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price,
    Vincenzo Frascino

Previously we allowed creating a memslot containing a private mapping that
was not VM_MTE_ALLOWED, but would later reject KVM_RUN with -EFAULT. Now
we reject the memory region at memslot creation time.

This is a minor tweak to the ABI, but a VMM that created one of these
memslots would have failed later anyway; given that to my knowledge no VMM
has MTE support yet and hardware with the necessary features is not
generally available, we can probably make this ABI change at this point.
Signed-off-by: Peter Collingbourne
---
 arch/arm64/kvm/mmu.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 750a69a97994..d54be80e31dd 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1073,6 +1073,19 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 	}
 }

+static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
+{
+	/*
+	 * VM_SHARED mappings are not allowed with MTE to avoid races
+	 * when updating the PG_mte_tagged page flag, see
+	 * sanitise_mte_tags for more details.
+	 */
+	if (vma->vm_flags & VM_SHARED)
+		return false;
+
+	return vma->vm_flags & VM_MTE_ALLOWED;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_memory_slot *memslot, unsigned long hva,
 			  unsigned long fault_status)
@@ -1249,9 +1262,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	}

 	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
-		/* Check the VMM hasn't introduced a new VM_SHARED VMA */
-		if ((vma->vm_flags & VM_MTE_ALLOWED) &&
-		    !(vma->vm_flags & VM_SHARED)) {
+		/* Check the VMM hasn't introduced a new disallowed VMA */
+		if (kvm_vma_mte_allowed(vma)) {
 			sanitise_mte_tags(kvm, pfn, vma_pagesize);
 		} else {
 			ret = -EFAULT;
@@ -1695,12 +1707,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 		if (!vma)
 			break;

-		/*
-		 * VM_SHARED mappings are not allowed with MTE to avoid races
-		 * when updating the PG_mte_tagged page flag, see
-		 * sanitise_mte_tags for more details.
-		 */
-		if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED) {
+		if (kvm_has_mte(kvm) && !kvm_vma_mte_allowed(vma)) {
 			ret = -EINVAL;
 			break;
 		}

From patchwork Fri Jul 22 01:50:32 2022
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12925875
Date: Thu, 21 Jul 2022 18:50:32 -0700
In-Reply-To: <20220722015034.809663-1-pcc@google.com>
Message-Id: <20220722015034.809663-7-pcc@google.com>
Subject: [PATCH v2 6/7] KVM: arm64: permit all VM_MTE_ALLOWED mappings with MTE enabled
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Peter Collingbourne, Cornelia Huck, Catalin Marinas, Will Deacon,
    Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price,
    Vincenzo Frascino

Certain VMMs such as crosvm have features (e.g. sandboxing) that depend on
being able to map guest memory as MAP_SHARED. The current restriction on
sharing MAP_SHARED pages with the guest is preventing the use of those
features with MTE. Now that the races between tasks concurrently clearing
tags on the same page have been fixed, remove this restriction.

Signed-off-by: Peter Collingbourne
---
 arch/arm64/kvm/mmu.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index d54be80e31dd..fc65dc20655d 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1075,14 +1075,6 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,

 static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
 {
-	/*
-	 * VM_SHARED mappings are not allowed with MTE to avoid races
-	 * when updating the PG_mte_tagged page flag, see
-	 * sanitise_mte_tags for more details.
-	 */
-	if (vma->vm_flags & VM_SHARED)
-		return false;
-
 	return vma->vm_flags & VM_MTE_ALLOWED;
 }

From patchwork Fri Jul 22 01:50:33 2022
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12925885
Date: Thu, 21 Jul 2022 18:50:33 -0700
In-Reply-To: <20220722015034.809663-1-pcc@google.com>
Message-Id: <20220722015034.809663-8-pcc@google.com>
Subject: [PATCH v2 7/7] Documentation: document the ABI changes for KVM_CAP_ARM_MTE
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Peter Collingbourne, Cornelia Huck, Catalin Marinas, Will Deacon,
    Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price,
    Vincenzo Frascino

Document both the restriction on VM_MTE_ALLOWED mappings and the
relaxation for shared mappings.

Signed-off-by: Peter Collingbourne
---
 Documentation/virt/kvm/api.rst | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index ebc5a519574f..5bb74b73bff3 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -7486,8 +7486,9 @@ hibernation of the host; however the VMM needs to manually save/restore
 the tags as appropriate if the VM is migrated.

 When this capability is enabled all memory in memslots must be mapped as
-not-shareable (no MAP_SHARED), attempts to create a memslot with a
-MAP_SHARED mmap will result in an -EINVAL return.
+``MAP_ANONYMOUS`` or with a RAM-based file mapping (``tmpfs``, ``memfd``),
+attempts to create a memslot with an invalid mmap will result in an
+-EINVAL return.

 When enabled the VMM may make use of the ``KVM_ARM_MTE_COPY_TAGS`` ioctl to
 perform a bulk copy of tags to/from the guest.