From patchwork Wed Aug 10 19:30:27 2022
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12940948
Date: Wed, 10 Aug 2022 12:30:27 -0700
In-Reply-To: <20220810193033.1090251-1-pcc@google.com>
Message-Id: <20220810193033.1090251-2-pcc@google.com>
References: <20220810193033.1090251-1-pcc@google.com>
Subject: [PATCH v3 1/7] arm64: mte: Fix/clarify the PG_mte_tagged semantics
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price, Vincenzo Frascino, Peter Collingbourne

From: Catalin Marinas

Currently the PG_mte_tagged page flag mostly means the page contains valid tags and it should be set after the tags have been cleared or restored. However, in mte_sync_tags() it is set before setting the tags to avoid, in theory, a race with concurrent mprotect(PROT_MTE) for shared pages. However, a concurrent mprotect(PROT_MTE) with a copy on write in another thread can cause the new page to have stale tags. Similarly, tag reading via ptrace() can read stale tags if the PG_mte_tagged flag is set before actually clearing/restoring the tags.

Fix the PG_mte_tagged semantics so that it is only set after the tags have been cleared or restored. This is safe for swap restoring into a MAP_SHARED or CoW page since the core code takes the page lock. Add two functions to test and set the PG_mte_tagged flag with acquire and release semantics. The downside is that concurrent mprotect(PROT_MTE) on a MAP_SHARED page may cause tag loss. This is already the case for KVM guests if a VMM changes the page protection while the guest triggers a user_mem_abort().
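The ordering contract described above can be sketched as a userspace analogy with C11 atomics. This is a hypothetical model, not kernel code: the patch itself pairs smp_wmb() + set_bit() on the writer side with test_bit() + smp_rmb() on the reader side, which is the same release/acquire pattern.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <string.h>

/* Stand-ins for a page's tag storage and its PG_mte_tagged flag. */
static char tag_storage[16];
static atomic_bool mte_tagged;

/* Clear/restore the tags first, then publish the flag with release
 * semantics so the tag writes are visible before the flag update
 * (analogous to smp_wmb() followed by set_bit() in the patch). */
static void set_page_mte_tagged_demo(void)
{
    memset(tag_storage, 0x5a, sizeof(tag_storage));
    atomic_store_explicit(&mte_tagged, true, memory_order_release);
}

/* Acquire load: a reader that observes the flag set is guaranteed to
 * also observe the initialised tags (analogous to test_bit() followed
 * by smp_rmb() in the patch). */
static bool page_mte_tagged_demo(void)
{
    return atomic_load_explicit(&mte_tagged, memory_order_acquire);
}
```

With the old semantics the flag could be observed before the tags were written; with this ordering a reader never sees the flag without the tags.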
Signed-off-by: Catalin Marinas Cc: Will Deacon Cc: Marc Zyngier Cc: Steven Price Cc: Peter Collingbourne Reviewed-by: Cornelia Huck Reviewed-by: Steven Price --- v3: - fix build with CONFIG_ARM64_MTE disabled arch/arm64/include/asm/mte.h | 30 ++++++++++++++++++++++++++++++ arch/arm64/include/asm/pgtable.h | 2 +- arch/arm64/kernel/cpufeature.c | 4 +++- arch/arm64/kernel/elfcore.c | 2 +- arch/arm64/kernel/hibernate.c | 2 +- arch/arm64/kernel/mte.c | 12 +++++++----- arch/arm64/kvm/guest.c | 4 ++-- arch/arm64/kvm/mmu.c | 4 ++-- arch/arm64/mm/copypage.c | 4 ++-- arch/arm64/mm/fault.c | 2 +- arch/arm64/mm/mteswap.c | 2 +- 11 files changed, 51 insertions(+), 17 deletions(-) diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h index aa523591a44e..46618c575eac 100644 --- a/arch/arm64/include/asm/mte.h +++ b/arch/arm64/include/asm/mte.h @@ -37,6 +37,29 @@ void mte_free_tag_storage(char *storage); /* track which pages have valid allocation tags */ #define PG_mte_tagged PG_arch_2 +static inline void set_page_mte_tagged(struct page *page) +{ + /* + * Ensure that the tags written prior to this function are visible + * before the page flags update. + */ + smp_wmb(); + set_bit(PG_mte_tagged, &page->flags); +} + +static inline bool page_mte_tagged(struct page *page) +{ + bool ret = test_bit(PG_mte_tagged, &page->flags); + + /* + * If the page is tagged, ensure ordering with a likely subsequent + * read of the tags. 
+ */ + if (ret) + smp_rmb(); + return ret; +} + void mte_zero_clear_page_tags(void *addr); void mte_sync_tags(pte_t old_pte, pte_t pte); void mte_copy_page_tags(void *kto, const void *kfrom); @@ -54,6 +77,13 @@ size_t mte_probe_user_range(const char __user *uaddr, size_t size); /* unused if !CONFIG_ARM64_MTE, silence the compiler */ #define PG_mte_tagged 0 +static inline void set_page_mte_tagged(struct page *page) +{ +} +static inline bool page_mte_tagged(struct page *page) +{ + return false; +} static inline void mte_zero_clear_page_tags(void *addr) { } diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index b5df82aa99e6..82719fa42c0e 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -1050,7 +1050,7 @@ static inline void arch_swap_invalidate_area(int type) static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio) { if (system_supports_mte() && mte_restore_tags(entry, &folio->page)) - set_bit(PG_mte_tagged, &folio->flags); + set_page_mte_tagged(&folio->page); } #endif /* CONFIG_ARM64_MTE */ diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 907401e4fffb..562c301bbf15 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -2037,8 +2037,10 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap) * Clear the tags in the zero page. This needs to be done via the * linear map which has the Tagged attribute. 
*/ - if (!test_and_set_bit(PG_mte_tagged, &ZERO_PAGE(0)->flags)) + if (!page_mte_tagged(ZERO_PAGE(0))) { mte_clear_page_tags(lm_alias(empty_zero_page)); + set_page_mte_tagged(ZERO_PAGE(0)); + } kasan_init_hw_tags_cpu(); } diff --git a/arch/arm64/kernel/elfcore.c b/arch/arm64/kernel/elfcore.c index 98d67444a5b6..f91bb1572d22 100644 --- a/arch/arm64/kernel/elfcore.c +++ b/arch/arm64/kernel/elfcore.c @@ -47,7 +47,7 @@ static int mte_dump_tag_range(struct coredump_params *cprm, * Pages mapped in user space as !pte_access_permitted() (e.g. * PROT_EXEC only) may not have the PG_mte_tagged flag set. */ - if (!test_bit(PG_mte_tagged, &page->flags)) { + if (!page_mte_tagged(page)) { put_page(page); dump_skip(cprm, MTE_PAGE_TAG_STORAGE); continue; diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c index af5df48ba915..788597a6b6a2 100644 --- a/arch/arm64/kernel/hibernate.c +++ b/arch/arm64/kernel/hibernate.c @@ -271,7 +271,7 @@ static int swsusp_mte_save_tags(void) if (!page) continue; - if (!test_bit(PG_mte_tagged, &page->flags)) + if (!page_mte_tagged(page)) continue; ret = save_tags(page, pfn); diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c index b2b730233274..2287316639f3 100644 --- a/arch/arm64/kernel/mte.c +++ b/arch/arm64/kernel/mte.c @@ -41,14 +41,17 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte, if (check_swap && is_swap_pte(old_pte)) { swp_entry_t entry = pte_to_swp_entry(old_pte); - if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) + if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) { + set_page_mte_tagged(page); return; + } } if (!pte_is_tagged) return; mte_clear_page_tags(page_address(page)); + set_page_mte_tagged(page); } void mte_sync_tags(pte_t old_pte, pte_t pte) @@ -64,7 +67,7 @@ void mte_sync_tags(pte_t old_pte, pte_t pte) /* if PG_mte_tagged is set, tags have already been initialised */ for (i = 0; i < nr_pages; i++, page++) { - if (!test_and_set_bit(PG_mte_tagged, 
&page->flags)) + if (!page_mte_tagged(page)) mte_sync_page_tags(page, old_pte, check_swap, pte_is_tagged); } @@ -91,8 +94,7 @@ int memcmp_pages(struct page *page1, struct page *page2) * pages is tagged, set_pte_at() may zero or change the tags of the * other page via mte_sync_tags(). */ - if (test_bit(PG_mte_tagged, &page1->flags) || - test_bit(PG_mte_tagged, &page2->flags)) + if (page_mte_tagged(page1) || page_mte_tagged(page2)) return addr1 != addr2; return ret; @@ -398,7 +400,7 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr, put_page(page); break; } - WARN_ON_ONCE(!test_bit(PG_mte_tagged, &page->flags)); + WARN_ON_ONCE(!page_mte_tagged(page)); /* limit access to the end of the page */ offset = offset_in_page(addr); diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 8c607199cad1..3b04e69006b4 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -1058,7 +1058,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm, maddr = page_address(page); if (!write) { - if (test_bit(PG_mte_tagged, &page->flags)) + if (page_mte_tagged(page)) num_tags = mte_copy_tags_to_user(tags, maddr, MTE_GRANULES_PER_PAGE); else @@ -1075,7 +1075,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm, * completed fully */ if (num_tags == MTE_GRANULES_PER_PAGE) - set_bit(PG_mte_tagged, &page->flags); + set_page_mte_tagged(page); kvm_release_pfn_dirty(pfn); } diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 87f1cd0df36e..c9012707f69c 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1075,9 +1075,9 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn, return -EFAULT; for (i = 0; i < nr_pages; i++, page++) { - if (!test_bit(PG_mte_tagged, &page->flags)) { + if (!page_mte_tagged(page)) { mte_clear_page_tags(page_address(page)); - set_bit(PG_mte_tagged, &page->flags); + set_page_mte_tagged(page); } } diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c index 24913271e898..4223389b6180 100644 
--- a/arch/arm64/mm/copypage.c +++ b/arch/arm64/mm/copypage.c @@ -21,9 +21,9 @@ void copy_highpage(struct page *to, struct page *from) copy_page(kto, kfrom); - if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) { - set_bit(PG_mte_tagged, &to->flags); + if (system_supports_mte() && page_mte_tagged(from)) { mte_copy_page_tags(kto, kfrom); + set_page_mte_tagged(to); } } EXPORT_SYMBOL(copy_highpage); diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index c33f1fad2745..d095bfa16771 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -931,5 +931,5 @@ struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma, void tag_clear_highpage(struct page *page) { mte_zero_clear_page_tags(page_address(page)); - set_bit(PG_mte_tagged, &page->flags); + set_page_mte_tagged(page); } diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c index 4334dec93bd4..a78c1db23c68 100644 --- a/arch/arm64/mm/mteswap.c +++ b/arch/arm64/mm/mteswap.c @@ -24,7 +24,7 @@ int mte_save_tags(struct page *page) { void *tag_storage, *ret; - if (!test_bit(PG_mte_tagged, &page->flags)) + if (!page_mte_tagged(page)) return 0; tag_storage = mte_allocate_tag_storage(); From patchwork Wed Aug 10 19:30:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Collingbourne X-Patchwork-Id: 12940950 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 070E8C00140 for ; Wed, 10 Aug 2022 19:32:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: 
Date: Wed, 10 Aug 2022 12:30:28 -0700
In-Reply-To: <20220810193033.1090251-1-pcc@google.com>
Message-Id: <20220810193033.1090251-3-pcc@google.com>
References: <20220810193033.1090251-1-pcc@google.com>
Subject: [PATCH v3 2/7] KVM: arm64: Simplify the sanitise_mte_tags() logic
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price, Vincenzo Frascino, Peter Collingbourne
From: Catalin Marinas

Currently sanitise_mte_tags() checks if it's an online page before attempting to sanitise the tags. Such detection should be done in the caller via the VM_MTE_ALLOWED vma flag. Since kvm_set_spte_gfn() does not have the vma, leave the page unmapped if not already tagged. Tag initialisation will be done on a subsequent access fault in user_mem_abort().

Signed-off-by: Catalin Marinas
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Steven Price
Cc: Peter Collingbourne
Reviewed-by: Steven Price
---
 arch/arm64/kvm/mmu.c | 40 +++++++++++++++-------------------------
 1 file changed, 15 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index c9012707f69c..1a3707aeb41f 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1056,23 +1056,14 @@ static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva) * - mmap_lock protects between a VM faulting a page in and the VMM performing * an mprotect() to add VM_MTE */ -static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn, - unsigned long size) +static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn, + unsigned long size) { unsigned long i, nr_pages = size >> PAGE_SHIFT; - struct page *page; + struct page *page = pfn_to_page(pfn); if (!kvm_has_mte(kvm)) - return 0; - - /* - * pfn_to_online_page() is used to reject ZONE_DEVICE pages - * that may not support tags.
- */ - page = pfn_to_online_page(pfn); - - if (!page) - return -EFAULT; + return; for (i = 0; i < nr_pages; i++, page++) { if (!page_mte_tagged(page)) { @@ -1080,8 +1071,6 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn, set_page_mte_tagged(page); } } - - return 0; } static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, @@ -1092,7 +1081,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, bool write_fault, writable, force_pte = false; bool exec_fault; bool device = false; - bool shared; unsigned long mmu_seq; struct kvm *kvm = vcpu->kvm; struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache; @@ -1142,8 +1130,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, vma_shift = get_vma_page_shift(vma, hva); } - shared = (vma->vm_flags & VM_SHARED); - switch (vma_shift) { #ifndef __PAGETABLE_PMD_FOLDED case PUD_SHIFT: @@ -1264,12 +1250,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) { /* Check the VMM hasn't introduced a new VM_SHARED VMA */ - if (!shared) - ret = sanitise_mte_tags(kvm, pfn, vma_pagesize); - else + if ((vma->vm_flags & VM_MTE_ALLOWED) && + !(vma->vm_flags & VM_SHARED)) { + sanitise_mte_tags(kvm, pfn, vma_pagesize); + } else { ret = -EFAULT; - if (ret) goto out_unlock; + } } if (writable) @@ -1491,15 +1478,18 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range) { kvm_pfn_t pfn = pte_pfn(range->pte); - int ret; if (!kvm->arch.mmu.pgt) return false; WARN_ON(range->end - range->start != 1); - ret = sanitise_mte_tags(kvm, pfn, PAGE_SIZE); - if (ret) + /* + * If the page isn't tagged, defer to user_mem_abort() for sanitising + * the MTE tags. The S2 pte should have been unmapped by + * mmu_notifier_invalidate_range_end(). 
+ */ + if (kvm_has_mte(kvm) && !page_mte_tagged(pfn_to_page(pfn))) return false; /* From patchwork Wed Aug 10 19:30:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Collingbourne X-Patchwork-Id: 12940951 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C0E73C00140 for ; Wed, 10 Aug 2022 19:32:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=t0BDGFTpCqZACFFStaBMOElfZsYqz+EiYI5A13JpG3s=; b=IGyWvDR1I2BS+nydOcXeB10wiV 2xT7oQ2ZxMHHrZj+f6J2clKo8cTfYbwjScUqIdxRMLEjl+RDVYcWjZ3KUBh/Gic/iklIRnrvBQrDg uILBIwr7D9Bod7znfsMOSpWMlbBXlNP4FKRWg4AVXA205LwIGA9laNttsswVOrLu7BIWiOTJF59tL DDtKGzQhfD6gfmdCYUqJrgR2Cur4qbhytkYUYq0MEwP7ME8d2IQEfNhgIbl1ucMiQhg3p3OKGXCre b+iTp1hzaJGcChQz2O3heIU5zqvIZAv9TO7HWgmGkY4i7R/q52kH+1URxcxpOIcY1ZtYi76IxdSHQ 9c0hZmgA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1oLrQD-00EC0c-OG; Wed, 10 Aug 2022 19:31:17 +0000 Received: from mail-yb1-xb49.google.com ([2607:f8b0:4864:20::b49]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1oLrPi-00EBgK-7k for linux-arm-kernel@lists.infradead.org; Wed, 10 Aug 2022 19:30:48 +0000 Received: by mail-yb1-xb49.google.com with SMTP id 
Date: Wed, 10 Aug 2022 12:30:29 -0700
In-Reply-To: <20220810193033.1090251-1-pcc@google.com>
Message-Id:
<20220810193033.1090251-4-pcc@google.com>
References: <20220810193033.1090251-1-pcc@google.com>
Subject: [PATCH v3 3/7] mm: Add PG_arch_3 page flag
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Peter Collingbourne, Cornelia Huck, Catalin Marinas, Will Deacon, Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price, Vincenzo Frascino

As with PG_arch_2, this flag is only allowed on 64-bit architectures due to the shortage of bits available. It will be used by the arm64 MTE code in subsequent patches.
Signed-off-by: Peter Collingbourne Cc: Will Deacon Cc: Marc Zyngier Cc: Steven Price [catalin.marinas@arm.com: added flag preserving in __split_huge_page_tail()] Signed-off-by: Catalin Marinas Reported-by: kernel test robot --- v3: - fix page flag dumping fs/proc/page.c | 1 + include/linux/kernel-page-flags.h | 1 + include/linux/page-flags.h | 1 + include/trace/events/mmflags.h | 7 ++++--- mm/huge_memory.c | 1 + tools/vm/page-types.c | 2 ++ 6 files changed, 10 insertions(+), 3 deletions(-) diff --git a/fs/proc/page.c b/fs/proc/page.c index a2873a617ae8..0129aa3cfb7a 100644 --- a/fs/proc/page.c +++ b/fs/proc/page.c @@ -220,6 +220,7 @@ u64 stable_page_flags(struct page *page) u |= kpf_copy_bit(k, KPF_ARCH, PG_arch_1); #ifdef CONFIG_64BIT u |= kpf_copy_bit(k, KPF_ARCH_2, PG_arch_2); + u |= kpf_copy_bit(k, KPF_ARCH_3, PG_arch_3); #endif return u; diff --git a/include/linux/kernel-page-flags.h b/include/linux/kernel-page-flags.h index eee1877a354e..859f4b0c1b2b 100644 --- a/include/linux/kernel-page-flags.h +++ b/include/linux/kernel-page-flags.h @@ -18,5 +18,6 @@ #define KPF_UNCACHED 39 #define KPF_SOFTDIRTY 40 #define KPF_ARCH_2 41 +#define KPF_ARCH_3 42 #endif /* LINUX_KERNEL_PAGE_FLAGS_H */ diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 465ff35a8c00..ad01a3abf6c8 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -134,6 +134,7 @@ enum pageflags { #endif #ifdef CONFIG_64BIT PG_arch_2, + PG_arch_3, #endif #ifdef CONFIG_KASAN_HW_TAGS PG_skip_kasan_poison, diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h index 11524cda4a95..704380179986 100644 --- a/include/trace/events/mmflags.h +++ b/include/trace/events/mmflags.h @@ -91,9 +91,9 @@ #endif #ifdef CONFIG_64BIT -#define IF_HAVE_PG_ARCH_2(flag,string) ,{1UL << flag, string} +#define IF_HAVE_PG_ARCH_2_3(flag,string) ,{1UL << flag, string} #else -#define IF_HAVE_PG_ARCH_2(flag,string) +#define IF_HAVE_PG_ARCH_2_3(flag,string) #endif #ifdef 
CONFIG_KASAN_HW_TAGS @@ -129,7 +129,8 @@ IF_HAVE_PG_UNCACHED(PG_uncached, "uncached" ) \ IF_HAVE_PG_HWPOISON(PG_hwpoison, "hwpoison" ) \ IF_HAVE_PG_IDLE(PG_young, "young" ) \ IF_HAVE_PG_IDLE(PG_idle, "idle" ) \ -IF_HAVE_PG_ARCH_2(PG_arch_2, "arch_2" ) \ +IF_HAVE_PG_ARCH_2_3(PG_arch_2, "arch_2" ) \ +IF_HAVE_PG_ARCH_2_3(PG_arch_3, "arch_3" ) \ IF_HAVE_PG_SKIP_KASAN_POISON(PG_skip_kasan_poison, "skip_kasan_poison") #define show_page_flags(flags) \ diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 0611b2fd145a..262e9ca627fb 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2399,6 +2399,7 @@ static void __split_huge_page_tail(struct page *head, int tail, (1L << PG_unevictable) | #ifdef CONFIG_64BIT (1L << PG_arch_2) | + (1L << PG_arch_3) | #endif (1L << PG_dirty))); diff --git a/tools/vm/page-types.c b/tools/vm/page-types.c index 381dcc00cb62..364373f5bba0 100644 --- a/tools/vm/page-types.c +++ b/tools/vm/page-types.c @@ -79,6 +79,7 @@ #define KPF_UNCACHED 39 #define KPF_SOFTDIRTY 40 #define KPF_ARCH_2 41 +#define KPF_ARCH_3 42 /* [47-] take some arbitrary free slots for expanding overloaded flags * not part of kernel API @@ -138,6 +139,7 @@ static const char * const page_flag_names[] = { [KPF_UNCACHED] = "c:uncached", [KPF_SOFTDIRTY] = "f:softdirty", [KPF_ARCH_2] = "H:arch_2", + [KPF_ARCH_3] = "H:arch_3", [KPF_ANON_EXCLUSIVE] = "d:anon_exclusive", [KPF_READAHEAD] = "I:readahead", From patchwork Wed Aug 10 19:30:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Collingbourne X-Patchwork-Id: 12940952 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id ECCB6C00140 for ; Wed, 10 Aug 2022 
19:32:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=tIrl+Kgis4+Me38qQNug6tL1CKeC7R/WjNT4XSZdo2w=; b=KKgeU4VATdkrTq0ukkrOj45DOV qUKv1w+s7cGKsAy0XTWX6nt33WtdieszqhtXoCY1do3wR/x8HvXa4uck6KSeZNmLabeCTaHjtHV/E KLBF7WokWwXN2Vs0XIJn99fa0P0bKTVHJ8cT2RYjQ3FDnB4m1X/copWYpW7ajpOR6HUsJh8jd/mpb yL/iInpuCMORRKuXdS0EM62Ey6FdBVuotDIFSz6arHpCzPkY8TnuqLpcw+K5hNrmHntGsGzDQsqwu SczbF/pxDx0ufh9wPpHfIhGWUxb4eeyRPC8/7c97c22y1iOuGBJzHJeTTMiqaJvvBOfAnKzrWa5g+ s+OndKkQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1oLrQU-00ECCF-Vh; Wed, 10 Aug 2022 19:31:35 +0000 Received: from mail-yw1-x1149.google.com ([2607:f8b0:4864:20::1149]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1oLrPl-00EBjY-0m for linux-arm-kernel@lists.infradead.org; Wed, 10 Aug 2022 19:30:50 +0000 Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-32a8e40e2dcso17032727b3.23 for ; Wed, 10 Aug 2022 12:30:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:references:mime-version:message-id:in-reply-to :date:from:to:cc; bh=6n+rX73meZMslL3Kuxbxfk8QMa8K/bIVcCBDjIXtjis=; b=d60USk4Uafg5q2XUc05A0w5Si82C9iWWf0rbSj/dwO10J97mWIZ7fxBGUlBBTjDosN jRxICbSe4MkW63C5vebVhDqHLjUtxB3bz9PbP7VQYq1oVpKhn9qyfDimeeDSQw3q0464 av8M+9Unmnnaj+vsEac9mc1VSIL5NygzxyJWnih5AlAg5UAUG7udwgff3R2QUyw5Hr9q HM5tdoBjAzxlGAOkfMaUeEV9XLOEhFULfzrHxAZX2PSCbjaQ7TWOgs79j4c1wAhWZLpW sutaz7V5ZDLNcV4YZxsKgtgm5oje8NMZh/N9MMRZA6zIBJaTuuAwg+Mk8Q/J/EEx4Hi5 ES9Q== 
Date: Wed, 10 Aug 2022 12:30:30 -0700
In-Reply-To: <20220810193033.1090251-1-pcc@google.com>
Message-Id: <20220810193033.1090251-5-pcc@google.com>
References: <20220810193033.1090251-1-pcc@google.com>
Subject: [PATCH v3 4/7] arm64: mte: Lock a page for MTE tag initialisation
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier,
 Evgenii Stepanov, kvm@vger.kernel.org, Steven Price,
 Vincenzo Frascino, Peter Collingbourne

From: Catalin Marinas

Initialising the tags and setting the PG_mte_tagged flag for a page can
race between multiple set_pte_at() calls on shared pages or setting the
stage 2 pte via user_mem_abort(). Introduce a new PG_mte_lock flag as
PG_arch_3 and set it before attempting page initialisation. Given that
PG_mte_tagged is never cleared for a page, consider setting this flag to
mean page unlocked and wait on this bit with acquire semantics if the
page is locked:

- try_page_mte_tagging() - lock the page for tagging, return true if it
  can be tagged, false if already tagged. No acquire semantics if it
  returns true (PG_mte_tagged not set) as there is no serialisation with
  a previous set_page_mte_tagged().

- set_page_mte_tagged() - set PG_mte_tagged with release semantics.

The two-bit locking is based on Peter Collingbourne's idea.
Signed-off-by: Catalin Marinas
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Steven Price
Cc: Peter Collingbourne
Reviewed-by: Steven Price
---
 arch/arm64/include/asm/mte.h     | 32 ++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/pgtable.h |  1 +
 arch/arm64/kernel/cpufeature.c   |  2 +-
 arch/arm64/kernel/mte.c          |  7 +++++--
 arch/arm64/kvm/guest.c           | 16 ++++++++++------
 arch/arm64/kvm/mmu.c             |  2 +-
 arch/arm64/mm/copypage.c         |  2 ++
 arch/arm64/mm/fault.c            |  2 ++
 arch/arm64/mm/mteswap.c          |  3 +++
 9 files changed, 57 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 46618c575eac..ea5158f6f6cb 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -36,6 +36,8 @@ void mte_free_tag_storage(char *storage);

 /* track which pages have valid allocation tags */
 #define PG_mte_tagged	PG_arch_2
+/* simple lock to avoid multiple threads tagging the same page */
+#define PG_mte_lock	PG_arch_3

 static inline void set_page_mte_tagged(struct page *page)
 {
@@ -60,6 +62,32 @@ static inline bool page_mte_tagged(struct page *page)
	return ret;
 }

+/*
+ * Lock the page for tagging and return 'true' if the page can be tagged,
+ * 'false' if already tagged. PG_mte_tagged is never cleared and therefore the
+ * locking only happens once for page initialisation.
+ *
+ * The page MTE lock state:
+ *
+ *   Locked:	PG_mte_lock && !PG_mte_tagged
+ *   Unlocked:	!PG_mte_lock || PG_mte_tagged
+ *
+ * Acquire semantics only if the page is tagged (returning 'false').
+ */
+static inline bool try_page_mte_tagging(struct page *page)
+{
+	if (!test_and_set_bit(PG_mte_lock, &page->flags))
+		return true;
+
+	/*
+	 * The tags are either being initialised or have already been
+	 * initialised, wait for the PG_mte_tagged flag to be set.
+	 */
+	smp_cond_load_acquire(&page->flags, VAL & (1UL << PG_mte_tagged));
+
+	return false;
+}
+
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t old_pte, pte_t pte);
 void mte_copy_page_tags(void *kto, const void *kfrom);
@@ -84,6 +112,10 @@ static inline bool page_mte_tagged(struct page *page)
 {
	return false;
 }
+static inline bool try_page_mte_tagging(struct page *page)
+{
+	return false;
+}
 static inline void mte_zero_clear_page_tags(void *addr)
 {
 }

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 82719fa42c0e..e6b82ad1e9e6 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1049,6 +1049,7 @@ static inline void arch_swap_invalidate_area(int type)
 #define __HAVE_ARCH_SWAP_RESTORE
 static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
 {
+	/* mte_restore_tags() takes the PG_mte_lock */
	if (system_supports_mte() && mte_restore_tags(entry, &folio->page))
		set_page_mte_tagged(&folio->page);
 }

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 562c301bbf15..33d342ddef87 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2037,7 +2037,7 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
	 * Clear the tags in the zero page. This needs to be done via the
	 * linear map which has the Tagged attribute.
	 */
-	if (!page_mte_tagged(ZERO_PAGE(0))) {
+	if (try_page_mte_tagging(ZERO_PAGE(0))) {
		mte_clear_page_tags(lm_alias(empty_zero_page));
		set_page_mte_tagged(ZERO_PAGE(0));
	}

diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 2287316639f3..634e089b5933 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -41,6 +41,7 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
	if (check_swap && is_swap_pte(old_pte)) {
		swp_entry_t entry = pte_to_swp_entry(old_pte);

+		/* mte_restore_tags() takes the PG_mte_lock */
		if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) {
			set_page_mte_tagged(page);
			return;
@@ -50,8 +51,10 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
	if (!pte_is_tagged)
		return;

-	mte_clear_page_tags(page_address(page));
-	set_page_mte_tagged(page);
+	if (try_page_mte_tagging(page)) {
+		mte_clear_page_tags(page_address(page));
+		set_page_mte_tagged(page);
+	}
 }

 void mte_sync_tags(pte_t old_pte, pte_t pte)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 3b04e69006b4..059b38e7a9e8 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -1067,15 +1067,19 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
			clear_user(tags, MTE_GRANULES_PER_PAGE);
			kvm_release_pfn_clean(pfn);
		} else {
+			/*
+			 * Only locking to serialise with a concurrent
+			 * set_pte_at() in the VMM but still overriding the
+			 * tags, hence ignoring the return value.
+			 */
+			try_page_mte_tagging(page);
			num_tags = mte_copy_tags_from_user(maddr, tags,
							MTE_GRANULES_PER_PAGE);

-			/*
-			 * Set the flag after checking the write
-			 * completed fully
-			 */
-			if (num_tags == MTE_GRANULES_PER_PAGE)
-				set_page_mte_tagged(page);
+			/* uaccess failed, don't leave stale tags */
+			if (num_tags != MTE_GRANULES_PER_PAGE)
+				mte_clear_page_tags(page);
+			set_page_mte_tagged(page);

			kvm_release_pfn_dirty(pfn);
		}

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1a3707aeb41f..750a69a97994 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1066,7 +1066,7 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
		return;

	for (i = 0; i < nr_pages; i++, page++) {
-		if (!page_mte_tagged(page)) {
+		if (try_page_mte_tagging(page)) {
			mte_clear_page_tags(page_address(page));
			set_page_mte_tagged(page);
		}

diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 4223389b6180..a3fa650ceca4 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -22,6 +22,8 @@ void copy_highpage(struct page *to, struct page *from)
	copy_page(kto, kfrom);

	if (system_supports_mte() && page_mte_tagged(from)) {
+		/* It's a new page, shouldn't have been tagged yet */
+		WARN_ON_ONCE(!try_page_mte_tagging(to));
		mte_copy_page_tags(kto, kfrom);
		set_page_mte_tagged(to);
	}

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index d095bfa16771..6407a29cab0d 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -930,6 +930,8 @@ struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,

 void tag_clear_highpage(struct page *page)
 {
+	/* Newly allocated page, shouldn't have been tagged yet */
+	WARN_ON_ONCE(!try_page_mte_tagging(page));
	mte_zero_clear_page_tags(page_address(page));
	set_page_mte_tagged(page);
 }

diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
index a78c1db23c68..cd5ad0936e16 100644
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -53,6 +53,9 @@ bool
mte_restore_tags(swp_entry_t entry, struct page *page)
	if (!tags)
		return false;

+	/* racing tag restoring? */
+	if (!try_page_mte_tagging(page))
+		return false;
	mte_restore_page_tags(page_address(page), tags);

	return true;

From patchwork Wed Aug 10 19:30:31 2022
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12940953
Date: Wed, 10 Aug 2022 12:30:31 -0700
In-Reply-To: <20220810193033.1090251-1-pcc@google.com>
Message-Id: <20220810193033.1090251-6-pcc@google.com>
References: <20220810193033.1090251-1-pcc@google.com>
Subject: [PATCH v3 5/7] KVM: arm64: unify the tests for VMAs in memslots when MTE is enabled
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Peter Collingbourne, Cornelia Huck, Catalin Marinas, Will Deacon,
 Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price,
 Vincenzo Frascino

Previously we allowed creating a memslot containing a private mapping
that was not VM_MTE_ALLOWED, but would later reject KVM_RUN with
-EFAULT. Now we reject the memory region at memslot creation time.

Since this is a minor tweak to the ABI (a VMM that created one of these
memslots would fail later anyway), no VMM to my knowledge has MTE
support yet, and the hardware with the necessary features is not
generally available, we can probably make this ABI change at this point.
Signed-off-by: Peter Collingbourne
Reviewed-by: Catalin Marinas
Reviewed-by: Steven Price
---
 arch/arm64/kvm/mmu.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 750a69a97994..d54be80e31dd 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1073,6 +1073,19 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
	}
 }

+static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
+{
+	/*
+	 * VM_SHARED mappings are not allowed with MTE to avoid races
+	 * when updating the PG_mte_tagged page flag, see
+	 * sanitise_mte_tags for more details.
+	 */
+	if (vma->vm_flags & VM_SHARED)
+		return false;
+
+	return vma->vm_flags & VM_MTE_ALLOWED;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
			  struct kvm_memory_slot *memslot, unsigned long hva,
			  unsigned long fault_status)
@@ -1249,9 +1262,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
	}

	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
-		/* Check the VMM hasn't introduced a new VM_SHARED VMA */
-		if ((vma->vm_flags & VM_MTE_ALLOWED) &&
-		    !(vma->vm_flags & VM_SHARED)) {
+		/* Check the VMM hasn't introduced a new disallowed VMA */
+		if (kvm_vma_mte_allowed(vma)) {
			sanitise_mte_tags(kvm, pfn, vma_pagesize);
		} else {
			ret = -EFAULT;
@@ -1695,12 +1707,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
		if (!vma)
			break;

-		/*
-		 * VM_SHARED mappings are not allowed with MTE to avoid races
-		 * when updating the PG_mte_tagged page flag, see
-		 * sanitise_mte_tags for more details.
-		 */
-		if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED) {
+		if (kvm_has_mte(kvm) && !kvm_vma_mte_allowed(vma)) {
			ret = -EINVAL;
			break;
		}

From patchwork Wed Aug 10 19:30:32 2022
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12940954
Date: Wed, 10 Aug 2022 12:30:32 -0700
In-Reply-To: <20220810193033.1090251-1-pcc@google.com>
Message-Id: <20220810193033.1090251-7-pcc@google.com>
References: <20220810193033.1090251-1-pcc@google.com>
Subject: [PATCH v3 6/7] KVM: arm64: permit all VM_MTE_ALLOWED mappings with MTE enabled
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Peter Collingbourne, Cornelia Huck, Catalin Marinas, Will Deacon,
 Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price,
 Vincenzo Frascino

Certain VMMs such as crosvm have features (e.g. sandboxing) that depend
on being able to map guest memory as MAP_SHARED. The current restriction
on sharing MAP_SHARED pages with the guest is preventing the use of
those features with MTE. Now that the races between tasks concurrently
clearing tags on the same page have been fixed, remove this restriction.

Signed-off-by: Peter Collingbourne
Reviewed-by: Catalin Marinas
Reviewed-by: Steven Price
---
 arch/arm64/kvm/mmu.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index d54be80e31dd..fc65dc20655d 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1075,14 +1075,6 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,

 static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
 {
-	/*
-	 * VM_SHARED mappings are not allowed with MTE to avoid races
-	 * when updating the PG_mte_tagged page flag, see
-	 * sanitise_mte_tags for more details.
-	 */
-	if (vma->vm_flags & VM_SHARED)
-		return false;
-
	return vma->vm_flags & VM_MTE_ALLOWED;
 }

From patchwork Wed Aug 10 19:30:33 2022
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12940955
Date: Wed, 10 Aug 2022 12:30:33 -0700
In-Reply-To: <20220810193033.1090251-1-pcc@google.com>
Message-Id: <20220810193033.1090251-8-pcc@google.com>
References: <20220810193033.1090251-1-pcc@google.com>
Subject: [PATCH v3 7/7] Documentation: document the ABI changes for KVM_CAP_ARM_MTE
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Peter Collingbourne, Cornelia Huck, Catalin Marinas, Will Deacon,
 Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price,
 Vincenzo Frascino

Document both the restriction on VM_MTE_ALLOWED mappings and the
relaxation for shared mappings.

Signed-off-by: Peter Collingbourne
Acked-by: Catalin Marinas
---
 Documentation/virt/kvm/api.rst | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 9788b19f9ff7..30e0c35828ef 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -7486,8 +7486,9 @@ hibernation of the host; however the VMM needs to manually save/restore
 the tags as appropriate if the VM is migrated.

 When this capability is enabled all memory in memslots must be mapped as
-not-shareable (no MAP_SHARED), attempts to create a memslot with a
-MAP_SHARED mmap will result in an -EINVAL return.
+``MAP_ANONYMOUS`` or with a RAM-based file mapping (``tmpfs``, ``memfd``),
+attempts to create a memslot with an invalid mmap will result in an
+-EINVAL return.
When enabled the VMM may make use of the ``KVM_ARM_MTE_COPY_TAGS`` ioctl to perform a bulk copy of tags to/from the guest.