From patchwork Wed Aug 10 19:30:30 2022
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12940952
Date: Wed, 10 Aug 2022 12:30:30 -0700
In-Reply-To: <20220810193033.1090251-1-pcc@google.com>
Message-Id: <20220810193033.1090251-5-pcc@google.com>
References: <20220810193033.1090251-1-pcc@google.com>
X-Mailer: git-send-email 2.37.1.559.g78731f0fdb-goog
Subject: [PATCH v3 4/7] arm64: mte: Lock a page for MTE tag initialisation
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier,
 Evgenii Stepanov, kvm@vger.kernel.org, Steven Price,
 Vincenzo Frascino, Peter Collingbourne

From: Catalin Marinas

Initialising the tags and setting the PG_mte_tagged flag for a page can
race between multiple set_pte_at() calls on shared pages or with setting
the stage 2 pte via user_mem_abort(). Introduce a new PG_mte_lock flag
as PG_arch_3 and set it before attempting page initialisation.

Given that PG_mte_tagged is never cleared for a page, treat a set
PG_mte_tagged as meaning the page is unlocked, and wait on this bit with
acquire semantics if the page is locked:

- try_page_mte_tagging() - lock the page for tagging; return true if it
  can be tagged, false if already tagged. No acquire semantics if it
  returns true (PG_mte_tagged not set), as there is no serialisation
  with a previous set_page_mte_tagged().

- set_page_mte_tagged() - set PG_mte_tagged with release semantics.

The two-bit locking is based on Peter Collingbourne's idea. A brief
user-space sketch of the scheme follows the diffstat below.

Signed-off-by: Catalin Marinas
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Steven Price
Cc: Peter Collingbourne
Reviewed-by: Steven Price
---
 arch/arm64/include/asm/mte.h     | 32 ++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/pgtable.h |  1 +
 arch/arm64/kernel/cpufeature.c   |  2 +-
 arch/arm64/kernel/mte.c          |  7 +++++--
 arch/arm64/kvm/guest.c           | 16 ++++++++++------
 arch/arm64/kvm/mmu.c             |  2 +-
 arch/arm64/mm/copypage.c         |  2 ++
 arch/arm64/mm/fault.c            |  2 ++
 arch/arm64/mm/mteswap.c          |  3 +++
 9 files changed, 57 insertions(+), 10 deletions(-)
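The locking scheme can be illustrated outside the kernel. Below is a
minimal user-space analogue using C11 atomics and POSIX threads; the
names (try_tagging, set_tagged, MTE_LOCK, MTE_TAGGED, the tags[] array)
are invented for this sketch, while the kernel itself operates on
page->flags via test_and_set_bit() and smp_cond_load_acquire():

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define MTE_TAGGED  (1UL << 0)  /* stands in for PG_mte_tagged */
    #define MTE_LOCK    (1UL << 1)  /* stands in for PG_mte_lock */

    static _Atomic unsigned long flags;  /* stands in for page->flags */
    static char tags[16];                /* stands in for the page's tags */

    /* Claim initialisation; returns true if the caller must initialise. */
    static bool try_tagging(void)
    {
        /* Relaxed on success: no prior initialisation to synchronise with. */
        if (!(atomic_fetch_or_explicit(&flags, MTE_LOCK,
                                       memory_order_relaxed) & MTE_LOCK))
            return true;

        /* Lost the race: wait, with acquire, until the tags are published. */
        while (!(atomic_load_explicit(&flags, memory_order_acquire) &
                 MTE_TAGGED))
            ;  /* the kernel uses smp_cond_load_acquire() here */
        return false;
    }

    /* Publish: release orders the tag writes before the flag is visible. */
    static void set_tagged(void)
    {
        atomic_fetch_or_explicit(&flags, MTE_TAGGED, memory_order_release);
    }

    static void *worker(void *arg)
    {
        (void)arg;
        if (try_tagging()) {
            memset(tags, 0x5a, sizeof(tags));  /* "initialise" exactly once */
            set_tagged();
        }
        /* Every thread may now rely on tags[] being fully initialised. */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];

        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);

        printf("tags[0] = 0x%x\n", (unsigned char)tags[0]);  /* always 0x5a */
        return 0;
    }

Whichever thread wins the test-and-set initialises the tags exactly
once; every loser spins until the release-store of MTE_TAGGED makes the
initialised tags visible to its acquire loads, mirroring the
Locked/Unlocked states documented in mte.h below.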
diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 46618c575eac..ea5158f6f6cb 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -36,6 +36,8 @@ void mte_free_tag_storage(char *storage);
 
 /* track which pages have valid allocation tags */
 #define PG_mte_tagged	PG_arch_2
+/* simple lock to avoid multiple threads tagging the same page */
+#define PG_mte_lock	PG_arch_3
 
 static inline void set_page_mte_tagged(struct page *page)
 {
@@ -60,6 +62,32 @@ static inline bool page_mte_tagged(struct page *page)
 	return ret;
 }
 
+/*
+ * Lock the page for tagging and return 'true' if the page can be tagged,
+ * 'false' if already tagged. PG_mte_tagged is never cleared and therefore
+ * the locking only happens once for page initialisation.
+ *
+ * The page MTE lock state:
+ *
+ *   Locked:	PG_mte_lock && !PG_mte_tagged
+ *   Unlocked:	!PG_mte_lock || PG_mte_tagged
+ *
+ * Acquire semantics only if the page is tagged (returning 'false').
+ */
+static inline bool try_page_mte_tagging(struct page *page)
+{
+	if (!test_and_set_bit(PG_mte_lock, &page->flags))
+		return true;
+
+	/*
+	 * The tags are either being initialised or have already been
+	 * initialised; wait for the PG_mte_tagged flag to be set.
+	 */
+	smp_cond_load_acquire(&page->flags, VAL & (1UL << PG_mte_tagged));
+
+	return false;
+}
+
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t old_pte, pte_t pte);
 void mte_copy_page_tags(void *kto, const void *kfrom);
@@ -84,6 +112,10 @@ static inline bool page_mte_tagged(struct page *page)
 {
 	return false;
 }
+static inline bool try_page_mte_tagging(struct page *page)
+{
+	return false;
+}
 static inline void mte_zero_clear_page_tags(void *addr)
 {
 }
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 82719fa42c0e..e6b82ad1e9e6 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1049,6 +1049,7 @@ static inline void arch_swap_invalidate_area(int type)
 #define __HAVE_ARCH_SWAP_RESTORE
 static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
 {
+	/* mte_restore_tags() takes the PG_mte_lock */
 	if (system_supports_mte() && mte_restore_tags(entry, &folio->page))
 		set_page_mte_tagged(&folio->page);
 }
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 562c301bbf15..33d342ddef87 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2037,7 +2037,7 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
 	 * Clear the tags in the zero page. This needs to be done via the
 	 * linear map which has the Tagged attribute.
 	 */
-	if (!page_mte_tagged(ZERO_PAGE(0))) {
+	if (try_page_mte_tagging(ZERO_PAGE(0))) {
 		mte_clear_page_tags(lm_alias(empty_zero_page));
 		set_page_mte_tagged(ZERO_PAGE(0));
 	}
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 2287316639f3..634e089b5933 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -41,6 +41,7 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
 	if (check_swap && is_swap_pte(old_pte)) {
 		swp_entry_t entry = pte_to_swp_entry(old_pte);
 
+		/* mte_restore_tags() takes the PG_mte_lock */
 		if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) {
 			set_page_mte_tagged(page);
 			return;
@@ -50,8 +51,10 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
 	if (!pte_is_tagged)
 		return;
 
-	mte_clear_page_tags(page_address(page));
-	set_page_mte_tagged(page);
+	if (try_page_mte_tagging(page)) {
+		mte_clear_page_tags(page_address(page));
+		set_page_mte_tagged(page);
+	}
 }
 
 void mte_sync_tags(pte_t old_pte, pte_t pte)
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 3b04e69006b4..059b38e7a9e8 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -1067,15 +1067,19 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 			clear_user(tags, MTE_GRANULES_PER_PAGE);
 			kvm_release_pfn_clean(pfn);
 		} else {
+			/*
+			 * Only locking to serialise with a concurrent
+			 * set_pte_at() in the VMM but still overriding the
+			 * tags, hence ignoring the return value.
+			 */
+			try_page_mte_tagging(page);
 			num_tags = mte_copy_tags_from_user(maddr, tags,
 							MTE_GRANULES_PER_PAGE);
 
-			/*
-			 * Set the flag after checking the write
-			 * completed fully
-			 */
-			if (num_tags == MTE_GRANULES_PER_PAGE)
-				set_page_mte_tagged(page);
+			/* uaccess failed, don't leave stale tags */
+			if (num_tags != MTE_GRANULES_PER_PAGE)
+				mte_clear_page_tags(page);
+			set_page_mte_tagged(page);
 
 			kvm_release_pfn_dirty(pfn);
 		}
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1a3707aeb41f..750a69a97994 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1066,7 +1066,7 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 		return;
 
 	for (i = 0; i < nr_pages; i++, page++) {
-		if (!page_mte_tagged(page)) {
+		if (try_page_mte_tagging(page)) {
 			mte_clear_page_tags(page_address(page));
 			set_page_mte_tagged(page);
 		}
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 4223389b6180..a3fa650ceca4 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -22,6 +22,8 @@ void copy_highpage(struct page *to, struct page *from)
 	copy_page(kto, kfrom);
 
 	if (system_supports_mte() && page_mte_tagged(from)) {
+		/* It's a new page, shouldn't have been tagged yet */
+		WARN_ON_ONCE(!try_page_mte_tagging(to));
 		mte_copy_page_tags(kto, kfrom);
 		set_page_mte_tagged(to);
 	}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index d095bfa16771..6407a29cab0d 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -930,6 +930,8 @@ struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
 
 void tag_clear_highpage(struct page *page)
 {
+	/* Newly allocated page, shouldn't have been tagged yet */
+	WARN_ON_ONCE(!try_page_mte_tagging(page));
 	mte_zero_clear_page_tags(page_address(page));
 	set_page_mte_tagged(page);
 }
diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
index a78c1db23c68..cd5ad0936e16 100644
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -53,6 +53,9 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
 	if (!tags)
 		return false;
 
+	/* racing tag restoring? */
+	if (!try_page_mte_tagging(page))
+		return false;
 	mte_restore_page_tags(page_address(page), tags);
 
 	return true;
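As a footnote, the Locked/Unlocked definitions in the mte.h comment
above can be tabulated with a throwaway sketch (the bit positions here
are invented for illustration, not the real PG_arch_* values):

    #include <stdbool.h>
    #include <stdio.h>

    /* Invented bit positions standing in for PG_mte_tagged / PG_mte_lock. */
    enum { TAGGED = 1u << 0, LOCK = 1u << 1 };

    /* "Locked" exactly as the mte.h comment defines it. */
    static bool locked(unsigned int f)
    {
        return (f & LOCK) && !(f & TAGGED);
    }

    int main(void)
    {
        for (unsigned int f = 0; f < 4; f++)
            printf("lock=%u tagged=%u -> %s\n",
                   !!(f & LOCK), !!(f & TAGGED),
                   locked(f) ? "locked (tag initialisation in progress)"
                             : "unlocked");
        return 0;
    }

Only the lock-set/tagged-clear state counts as locked, i.e.
initialisation in progress; once PG_mte_tagged is set, the page
permanently reads as unlocked, which is why the scheme never needs an
explicit unlock path.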