From patchwork Fri Nov 4 01:10:34 2022
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 13031156
Date: Thu, 3 Nov 2022 18:10:34 -0700
In-Reply-To: <20221104011041.290951-1-pcc@google.com>
Message-Id: <20221104011041.290951-2-pcc@google.com>
Mime-Version: 1.0
References:
<20221104011041.290951-1-pcc@google.com>
Subject: [PATCH v5 1/8] mm: Do not enable PG_arch_2 for all 64-bit architectures
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier,
    Evgenii Stepanov, kvm@vger.kernel.org, Steven Price,
    Vincenzo Frascino, Peter Collingbourne, kernel test robot,
    Andrew Morton

From: Catalin Marinas

Commit 4beba9486abd ("mm: Add PG_arch_2 page flag") introduced a new
page flag for all 64-bit architectures. However, even if an architecture
is 64-bit, it may still have limited spare bits in the 'flags' member of
'struct page'. This may happen if an architecture enables SPARSEMEM
without SPARSEMEM_VMEMMAP, as is the case with the newly added
loongarch. This architecture port needs 19 more bits for the sparsemem
section information and, while it is currently fine with PG_arch_2,
adding any more PG_arch_* flags will trigger build-time warnings.

Add a new CONFIG_ARCH_USES_PG_ARCH_X option which can be selected by
architectures that need more PG_arch_* flags beyond PG_arch_1. Select it
on arm64.
Signed-off-by: Catalin Marinas
[pcc@google.com: fix build with CONFIG_ARM64_MTE disabled]
Signed-off-by: Peter Collingbourne
Reported-by: kernel test robot
Cc: Andrew Morton
Cc: Steven Price
Reviewed-by: Steven Price
---
 arch/arm64/Kconfig             | 1 +
 fs/proc/page.c                 | 2 +-
 include/linux/page-flags.h     | 2 +-
 include/trace/events/mmflags.h | 8 ++++----
 mm/Kconfig                     | 8 ++++++++
 mm/huge_memory.c               | 2 +-
 6 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 2d505fc0e85e..db6b80752e5d 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1966,6 +1966,7 @@ config ARM64_MTE
 	depends on ARM64_PAN
 	select ARCH_HAS_SUBPAGE_FAULTS
 	select ARCH_USES_HIGH_VMA_FLAGS
+	select ARCH_USES_PG_ARCH_X
 	help
 	  Memory Tagging (part of the ARMv8.5 Extensions) provides
 	  architectural support for run-time, always-on detection of

diff --git a/fs/proc/page.c b/fs/proc/page.c
index f2273b164535..882525c8e94c 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -219,7 +219,7 @@ u64 stable_page_flags(struct page *page)
 	u |= kpf_copy_bit(k, KPF_PRIVATE_2,	PG_private_2);
 	u |= kpf_copy_bit(k, KPF_OWNER_PRIVATE,	PG_owner_priv_1);
 	u |= kpf_copy_bit(k, KPF_ARCH,		PG_arch_1);
-#ifdef CONFIG_64BIT
+#ifdef CONFIG_ARCH_USES_PG_ARCH_X
 	u |= kpf_copy_bit(k, KPF_ARCH_2,	PG_arch_2);
 #endif

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 0b0ae5084e60..5dc7977edf9d 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -132,7 +132,7 @@ enum pageflags {
 	PG_young,
 	PG_idle,
 #endif
-#ifdef CONFIG_64BIT
+#ifdef CONFIG_ARCH_USES_PG_ARCH_X
 	PG_arch_2,
 #endif
 #ifdef CONFIG_KASAN_HW_TAGS

diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index 11524cda4a95..4673e58a7626 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -90,10 +90,10 @@
 #define IF_HAVE_PG_IDLE(flag,string)
 #endif
 
-#ifdef CONFIG_64BIT
-#define IF_HAVE_PG_ARCH_2(flag,string) ,{1UL << flag, string}
+#ifdef CONFIG_ARCH_USES_PG_ARCH_X
+#define IF_HAVE_PG_ARCH_X(flag,string) ,{1UL << flag, string}
 #else
-#define IF_HAVE_PG_ARCH_2(flag,string)
+#define IF_HAVE_PG_ARCH_X(flag,string)
 #endif
 
 #ifdef CONFIG_KASAN_HW_TAGS
@@ -129,7 +129,7 @@
 IF_HAVE_PG_UNCACHED(PG_uncached,	"uncached"	) \
 IF_HAVE_PG_HWPOISON(PG_hwpoison,	"hwpoison"	) \
 IF_HAVE_PG_IDLE(PG_young,		"young"		) \
 IF_HAVE_PG_IDLE(PG_idle,		"idle"		) \
-IF_HAVE_PG_ARCH_2(PG_arch_2,		"arch_2"	) \
+IF_HAVE_PG_ARCH_X(PG_arch_2,		"arch_2"	) \
 IF_HAVE_PG_SKIP_KASAN_POISON(PG_skip_kasan_poison, "skip_kasan_poison")
 
 #define show_page_flags(flags)						\

diff --git a/mm/Kconfig b/mm/Kconfig
index b0b56c33f2ed..8e9e26ca472c 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1005,6 +1005,14 @@ config ARCH_USES_HIGH_VMA_FLAGS
 config ARCH_HAS_PKEYS
 	bool
 
+config ARCH_USES_PG_ARCH_X
+	bool
+	help
+	  Enable the definition of PG_arch_x page flags with x > 1. Only
+	  suitable for 64-bit architectures with CONFIG_FLATMEM or
+	  CONFIG_SPARSEMEM_VMEMMAP enabled, otherwise there may not be
+	  enough room for additional bits in page->flags.
+
 config VM_EVENT_COUNTERS
 	default y
 	bool "Enable VM event counters for /proc/vmstat" if EXPERT

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1d47b3f7b877..5d87dc4611b9 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2401,7 +2401,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
 			 (1L << PG_workingset) |
 			 (1L << PG_locked) |
 			 (1L << PG_unevictable) |
-#ifdef CONFIG_64BIT
+#ifdef CONFIG_ARCH_USES_PG_ARCH_X
 			 (1L << PG_arch_2) |
 #endif
 			 (1L << PG_dirty) |

From patchwork Fri Nov 4 01:10:35 2022
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 13031157
Date: Thu, 3 Nov 2022 18:10:35 -0700
In-Reply-To: <20221104011041.290951-1-pcc@google.com>
Message-Id: <20221104011041.290951-3-pcc@google.com>
Mime-Version: 1.0
References: <20221104011041.290951-1-pcc@google.com>
Subject: [PATCH v5 2/8] arm64: mte: Fix/clarify the PG_mte_tagged semantics
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier,
    Evgenii Stepanov, kvm@vger.kernel.org, Steven Price,
    Vincenzo Frascino, Peter Collingbourne

From: Catalin Marinas

Currently the PG_mte_tagged page flag mostly means the page contains
valid tags and it should be set after the tags have been cleared or
restored. However, in mte_sync_tags() it is set before setting the tags
to avoid, in theory, a race with concurrent mprotect(PROT_MTE) for
shared pages. However, a concurrent mprotect(PROT_MTE) with a copy on
write in another thread can cause the new page to have stale tags.
Similarly, tag reading via ptrace() can read stale tags if the
PG_mte_tagged flag is set before actually clearing/restoring the tags.
Fix the PG_mte_tagged semantics so that it is only set after the tags
have been cleared or restored. This is safe for swap restoring into a
MAP_SHARED or CoW page since the core code takes the page lock. Add two
functions to test and set the PG_mte_tagged flag with acquire and
release semantics. The downside is that concurrent mprotect(PROT_MTE)
on a MAP_SHARED page may cause tag loss. This is already the case for
KVM guests if a VMM changes the page protection while the guest
triggers a user_mem_abort().

Signed-off-by: Catalin Marinas
[pcc@google.com: fix build with CONFIG_ARM64_MTE disabled]
Signed-off-by: Peter Collingbourne
Reviewed-by: Cornelia Huck
Reviewed-by: Steven Price
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Peter Collingbourne
---
 arch/arm64/include/asm/mte.h     | 30 ++++++++++++++++++++++++++++++
 arch/arm64/include/asm/pgtable.h |  2 +-
 arch/arm64/kernel/cpufeature.c   |  4 +++-
 arch/arm64/kernel/elfcore.c      |  2 +-
 arch/arm64/kernel/hibernate.c    |  2 +-
 arch/arm64/kernel/mte.c          | 17 +++++++++++------
 arch/arm64/kvm/guest.c           |  4 ++--
 arch/arm64/kvm/mmu.c             |  4 ++--
 arch/arm64/mm/copypage.c         |  5 +++--
 arch/arm64/mm/fault.c            |  2 +-
 arch/arm64/mm/mteswap.c          |  2 +-
 11 files changed, 56 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 760c62f8e22f..3f8199ba265a 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -37,6 +37,29 @@ void mte_free_tag_storage(char *storage);
 /* track which pages have valid allocation tags */
 #define PG_mte_tagged	PG_arch_2
 
+static inline void set_page_mte_tagged(struct page *page)
+{
+	/*
+	 * Ensure that the tags written prior to this function are visible
+	 * before the page flags update.
+	 */
+	smp_wmb();
+	set_bit(PG_mte_tagged, &page->flags);
+}
+
+static inline bool page_mte_tagged(struct page *page)
+{
+	bool ret = test_bit(PG_mte_tagged, &page->flags);
+
+	/*
+	 * If the page is tagged, ensure ordering with a likely subsequent
+	 * read of the tags.
+	 */
+	if (ret)
+		smp_rmb();
+	return ret;
+}
+
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t old_pte, pte_t pte);
 void mte_copy_page_tags(void *kto, const void *kfrom);
@@ -56,6 +79,13 @@ size_t mte_probe_user_range(const char __user *uaddr, size_t size);
 /* unused if !CONFIG_ARM64_MTE, silence the compiler */
 #define PG_mte_tagged	0
 
+static inline void set_page_mte_tagged(struct page *page)
+{
+}
+static inline bool page_mte_tagged(struct page *page)
+{
+	return false;
+}
 static inline void mte_zero_clear_page_tags(void *addr)
 {
 }

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 4873c1d6e7d0..c6a2d8891d2a 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1048,7 +1048,7 @@ static inline void arch_swap_invalidate_area(int type)
 static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
 {
 	if (system_supports_mte() && mte_restore_tags(entry, &folio->page))
-		set_bit(PG_mte_tagged, &folio->flags);
+		set_page_mte_tagged(&folio->page);
 }
 
 #endif /* CONFIG_ARM64_MTE */

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 6062454a9067..df11cfe61fcb 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2050,8 +2050,10 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
 	 * Clear the tags in the zero page. This needs to be done via the
 	 * linear map which has the Tagged attribute.
 	 */
-	if (!test_and_set_bit(PG_mte_tagged, &ZERO_PAGE(0)->flags))
+	if (!page_mte_tagged(ZERO_PAGE(0))) {
 		mte_clear_page_tags(lm_alias(empty_zero_page));
+		set_page_mte_tagged(ZERO_PAGE(0));
+	}
 
 	kasan_init_hw_tags_cpu();
 }

diff --git a/arch/arm64/kernel/elfcore.c b/arch/arm64/kernel/elfcore.c
index 27ef7ad3ffd2..353009d7f307 100644
--- a/arch/arm64/kernel/elfcore.c
+++ b/arch/arm64/kernel/elfcore.c
@@ -47,7 +47,7 @@ static int mte_dump_tag_range(struct coredump_params *cprm,
 		 * Pages mapped in user space as !pte_access_permitted() (e.g.
 		 * PROT_EXEC only) may not have the PG_mte_tagged flag set.
 		 */
-		if (!test_bit(PG_mte_tagged, &page->flags)) {
+		if (!page_mte_tagged(page)) {
 			put_page(page);
 			dump_skip(cprm, MTE_PAGE_TAG_STORAGE);
 			continue;

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index af5df48ba915..788597a6b6a2 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -271,7 +271,7 @@ static int swsusp_mte_save_tags(void)
 		if (!page)
 			continue;
 
-		if (!test_bit(PG_mte_tagged, &page->flags))
+		if (!page_mte_tagged(page))
 			continue;
 
 		ret = save_tags(page, pfn);

diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 7467217c1eaf..84a085d536f8 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -41,8 +41,10 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
 	if (check_swap && is_swap_pte(old_pte)) {
 		swp_entry_t entry = pte_to_swp_entry(old_pte);
 
-		if (!non_swap_entry(entry) && mte_restore_tags(entry, page))
+		if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) {
+			set_page_mte_tagged(page);
 			return;
+		}
 	}
 
 	if (!pte_is_tagged)
@@ -52,8 +54,10 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
 	 * Test PG_mte_tagged again in case it was racing with another
 	 * set_pte_at().
 	 */
-	if (!test_and_set_bit(PG_mte_tagged, &page->flags))
+	if (!page_mte_tagged(page)) {
 		mte_clear_page_tags(page_address(page));
+		set_page_mte_tagged(page);
+	}
 }
 
 void mte_sync_tags(pte_t old_pte, pte_t pte)
@@ -69,9 +73,11 @@ void mte_sync_tags(pte_t old_pte, pte_t pte)
 
 	/* if PG_mte_tagged is set, tags have already been initialised */
 	for (i = 0; i < nr_pages; i++, page++) {
-		if (!test_bit(PG_mte_tagged, &page->flags))
+		if (!page_mte_tagged(page)) {
 			mte_sync_page_tags(page, old_pte, check_swap,
 					   pte_is_tagged);
+			set_page_mte_tagged(page);
+		}
 	}
 
 	/* ensure the tags are visible before the PTE is set */
@@ -96,8 +102,7 @@ int memcmp_pages(struct page *page1, struct page *page2)
 	 * pages is tagged, set_pte_at() may zero or change the tags of the
 	 * other page via mte_sync_tags().
 	 */
-	if (test_bit(PG_mte_tagged, &page1->flags) ||
-	    test_bit(PG_mte_tagged, &page2->flags))
+	if (page_mte_tagged(page1) || page_mte_tagged(page2))
 		return addr1 != addr2;
 
 	return ret;
@@ -454,7 +459,7 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
 			put_page(page);
 			break;
 		}
-		WARN_ON_ONCE(!test_bit(PG_mte_tagged, &page->flags));
+		WARN_ON_ONCE(!page_mte_tagged(page));
 
 		/* limit access to the end of the page */
 		offset = offset_in_page(addr);

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 2ff13a3f8479..817fdd1ab778 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -1059,7 +1059,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 		maddr = page_address(page);
 
 		if (!write) {
-			if (test_bit(PG_mte_tagged, &page->flags))
+			if (page_mte_tagged(page))
 				num_tags = mte_copy_tags_to_user(tags, maddr,
 							MTE_GRANULES_PER_PAGE);
 			else
@@ -1076,7 +1076,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 			 * completed fully
 			 */
 			if (num_tags == MTE_GRANULES_PER_PAGE)
-				set_bit(PG_mte_tagged, &page->flags);
+				set_page_mte_tagged(page);
 
 			kvm_release_pfn_dirty(pfn);
 		}

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index
60ee3d9f01f8..2c3759f1f2c5 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1110,9 +1110,9 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 		return -EFAULT;
 
 	for (i = 0; i < nr_pages; i++, page++) {
-		if (!test_bit(PG_mte_tagged, &page->flags)) {
+		if (!page_mte_tagged(page)) {
 			mte_clear_page_tags(page_address(page));
-			set_bit(PG_mte_tagged, &page->flags);
+			set_page_mte_tagged(page);
 		}
 	}

diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 24913271e898..731d8a35701e 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -21,9 +21,10 @@ void copy_highpage(struct page *to, struct page *from)
 
 	copy_page(kto, kfrom);
 
-	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
-		set_bit(PG_mte_tagged, &to->flags);
+	if (system_supports_mte() && page_mte_tagged(from)) {
+		page_kasan_tag_reset(to);
 		mte_copy_page_tags(kto, kfrom);
+		set_page_mte_tagged(to);
 	}
 }
 EXPORT_SYMBOL(copy_highpage);

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 3e9cf9826417..e09e0344c7a7 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -938,5 +938,5 @@ struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
 void tag_clear_highpage(struct page *page)
 {
 	mte_zero_clear_page_tags(page_address(page));
-	set_bit(PG_mte_tagged, &page->flags);
+	set_page_mte_tagged(page);
 }

diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
index bed803d8e158..70f913205db9 100644
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -24,7 +24,7 @@ int mte_save_tags(struct page *page)
 {
 	void *tag_storage, *ret;
 
-	if (!test_bit(PG_mte_tagged, &page->flags))
+	if (!page_mte_tagged(page))
 		return 0;
 
 	tag_storage = mte_allocate_tag_storage();

From patchwork Fri Nov 4 01:10:36 2022
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 13031158
Date: Thu, 3 Nov 2022 18:10:36 -0700
In-Reply-To: <20221104011041.290951-1-pcc@google.com>
Message-Id: <20221104011041.290951-4-pcc@google.com>
Mime-Version: 1.0
References: <20221104011041.290951-1-pcc@google.com>
Subject: [PATCH v5 3/8] KVM: arm64: Simplify the sanitise_mte_tags() logic
From: Peter Collingbourne
To:
linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier,
    Evgenii Stepanov, kvm@vger.kernel.org, Steven Price,
    Vincenzo Frascino, Peter Collingbourne

From: Catalin Marinas

Currently sanitise_mte_tags() checks if it's an online page before
attempting to sanitise the tags. Such detection should be done in the
caller via the VM_MTE_ALLOWED vma flag. Since kvm_set_spte_gfn() does
not have the vma, leave the page unmapped if not already tagged. Tag
initialisation will be done on a subsequent access fault in
user_mem_abort().
Signed-off-by: Catalin Marinas
[pcc@google.com: fix the page initializer]
Signed-off-by: Peter Collingbourne
Reviewed-by: Steven Price
Reviewed-by: Cornelia Huck
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Peter Collingbourne
---
 arch/arm64/kvm/mmu.c | 40 +++++++++++++++-------------------------
 1 file changed, 15 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 2c3759f1f2c5..e81bfb730629 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1091,23 +1091,14 @@ static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
  * - mmap_lock protects between a VM faulting a page in and the VMM performing
  *   an mprotect() to add VM_MTE
  */
-static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
-			     unsigned long size)
+static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
+			      unsigned long size)
 {
 	unsigned long i, nr_pages = size >> PAGE_SHIFT;
-	struct page *page;
+	struct page *page = pfn_to_page(pfn);
 
 	if (!kvm_has_mte(kvm))
-		return 0;
-
-	/*
-	 * pfn_to_online_page() is used to reject ZONE_DEVICE pages
-	 * that may not support tags.
-	 */
-	page = pfn_to_online_page(pfn);
-
-	if (!page)
-		return -EFAULT;
+		return;
 
 	for (i = 0; i < nr_pages; i++, page++) {
 		if (!page_mte_tagged(page)) {
@@ -1115,8 +1106,6 @@ static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 			set_page_mte_tagged(page);
 		}
 	}
-
-	return 0;
 }
 
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
@@ -1127,7 +1116,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	bool write_fault, writable, force_pte = false;
 	bool exec_fault;
 	bool device = false;
-	bool shared;
 	unsigned long mmu_seq;
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
@@ -1177,8 +1165,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		vma_shift = get_vma_page_shift(vma, hva);
 	}
 
-	shared = (vma->vm_flags & VM_SHARED);
-
 	switch (vma_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED
 	case PUD_SHIFT:
@@ -1299,12 +1285,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
 		/* Check the VMM hasn't introduced a new VM_SHARED VMA */
-		if (!shared)
-			ret = sanitise_mte_tags(kvm, pfn, vma_pagesize);
-		else
+		if ((vma->vm_flags & VM_MTE_ALLOWED) &&
+		    !(vma->vm_flags & VM_SHARED)) {
+			sanitise_mte_tags(kvm, pfn, vma_pagesize);
+		} else {
 			ret = -EFAULT;
-		if (ret)
 			goto out_unlock;
+		}
 	}
 
 	if (writable)
@@ -1526,15 +1513,18 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	kvm_pfn_t pfn = pte_pfn(range->pte);
-	int ret;
 
 	if (!kvm->arch.mmu.pgt)
 		return false;
 
 	WARN_ON(range->end - range->start != 1);
 
-	ret = sanitise_mte_tags(kvm, pfn, PAGE_SIZE);
-	if (ret)
+	/*
+	 * If the page isn't tagged, defer to user_mem_abort() for sanitising
+	 * the MTE tags. The S2 pte should have been unmapped by
+	 * mmu_notifier_invalidate_range_end().
+	 */
+	if (kvm_has_mte(kvm) && !page_mte_tagged(pfn_to_page(pfn)))
 		return false;
 
 	/*

From patchwork Fri Nov 4 01:10:37 2022
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 13031159
Date: Thu, 3 Nov 2022 18:10:37 -0700
In-Reply-To: <20221104011041.290951-1-pcc@google.com>
Message-Id: <20221104011041.290951-5-pcc@google.com>
References: <20221104011041.290951-1-pcc@google.com>
Subject: [PATCH v5 4/8] mm: Add PG_arch_3 page flag
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Peter Collingbourne, Cornelia Huck, Catalin Marinas, Will Deacon, Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price, Vincenzo Frascino

As with PG_arch_2, this flag is only allowed on 64-bit architectures due to
the shortage of bits available. It will be used by the arm64 MTE code in
subsequent patches.
Signed-off-by: Peter Collingbourne
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Steven Price
[catalin.marinas@arm.com: added flag preserving in __split_huge_page_tail()]
Signed-off-by: Catalin Marinas
Reviewed-by: Steven Price
---
 fs/proc/page.c                    | 1 +
 include/linux/kernel-page-flags.h | 1 +
 include/linux/page-flags.h        | 1 +
 include/trace/events/mmflags.h    | 1 +
 mm/huge_memory.c                  | 1 +
 5 files changed, 5 insertions(+)

diff --git a/fs/proc/page.c b/fs/proc/page.c
index 882525c8e94c..6249c347809a 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -221,6 +221,7 @@ u64 stable_page_flags(struct page *page)
 	u |= kpf_copy_bit(k, KPF_ARCH, PG_arch_1);
 #ifdef CONFIG_ARCH_USES_PG_ARCH_X
 	u |= kpf_copy_bit(k, KPF_ARCH_2, PG_arch_2);
+	u |= kpf_copy_bit(k, KPF_ARCH_3, PG_arch_3);
 #endif
 
 	return u;

diff --git a/include/linux/kernel-page-flags.h b/include/linux/kernel-page-flags.h
index eee1877a354e..859f4b0c1b2b 100644
--- a/include/linux/kernel-page-flags.h
+++ b/include/linux/kernel-page-flags.h
@@ -18,5 +18,6 @@
 #define KPF_UNCACHED		39
 #define KPF_SOFTDIRTY		40
 #define KPF_ARCH_2		41
+#define KPF_ARCH_3		42
 
 #endif /* LINUX_KERNEL_PAGE_FLAGS_H */

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 5dc7977edf9d..c50ce2812f17 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -134,6 +134,7 @@ enum pageflags {
 #endif
 #ifdef CONFIG_ARCH_USES_PG_ARCH_X
 	PG_arch_2,
+	PG_arch_3,
 #endif
 #ifdef CONFIG_KASAN_HW_TAGS
 	PG_skip_kasan_poison,

diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index 4673e58a7626..9db52bc4ce19 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -130,6 +130,7 @@
 IF_HAVE_PG_HWPOISON(PG_hwpoison,	"hwpoison"	)		\
 IF_HAVE_PG_IDLE(PG_young,		"young"		)		\
 IF_HAVE_PG_IDLE(PG_idle,		"idle"		)		\
 IF_HAVE_PG_ARCH_X(PG_arch_2,		"arch_2"	)		\
+IF_HAVE_PG_ARCH_X(PG_arch_3,		"arch_3"	)		\
 IF_HAVE_PG_SKIP_KASAN_POISON(PG_skip_kasan_poison, "skip_kasan_poison")
 
 #define show_page_flags(flags)						\

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5d87dc4611b9..c509011bd4a2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2403,6 +2403,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
			 (1L << PG_unevictable) |
 #ifdef CONFIG_ARCH_USES_PG_ARCH_X
			 (1L << PG_arch_2) |
+			 (1L << PG_arch_3) |
 #endif
			 (1L << PG_dirty) |
			 LRU_GEN_MASK | LRU_REFS_MASK));

From patchwork Fri Nov 4 01:10:38 2022
Date: Thu, 3 Nov 2022 18:10:38 -0700
In-Reply-To: <20221104011041.290951-1-pcc@google.com>
Message-Id: <20221104011041.290951-6-pcc@google.com>
References: <20221104011041.290951-1-pcc@google.com>
Subject: [PATCH v5 5/8] arm64: mte: Lock a page for MTE tag initialisation
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Catalin Marinas, Cornelia Huck, Will Deacon, Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price, Vincenzo Frascino, Peter Collingbourne

From: Catalin Marinas

Initialising the tags and setting the PG_mte_tagged flag for a page can race
between multiple set_pte_at() on shared pages or setting the stage 2 pte via
user_mem_abort(). Introduce a new PG_mte_lock flag as PG_arch_3 and set it
before attempting page initialisation. Given that PG_mte_tagged is never
cleared for a page, consider setting this flag to mean page unlocked and
wait on this bit with acquire semantics if the page is locked:

- try_page_mte_tagging() - lock the page for tagging, return true if it can
  be tagged, false if already tagged. No acquire semantics if it returns
  true (PG_mte_tagged not set) as there is no serialisation with a previous
  set_page_mte_tagged().

- set_page_mte_tagged() - set PG_mte_tagged with release semantics.

The two-bit locking is based on Peter Collingbourne's idea.

Signed-off-by: Catalin Marinas
Signed-off-by: Peter Collingbourne
Reviewed-by: Steven Price
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Peter Collingbourne
Reviewed-by: Cornelia Huck
---
 arch/arm64/include/asm/mte.h     | 35 +++++++++++++++++++++++++++++++-
 arch/arm64/include/asm/pgtable.h |  4 ++--
 arch/arm64/kernel/cpufeature.c   |  2 +-
 arch/arm64/kernel/mte.c          | 12 +++--------
 arch/arm64/kvm/guest.c           | 16 +++++++++------
 arch/arm64/kvm/mmu.c             |  2 +-
 arch/arm64/mm/copypage.c         |  2 ++
 arch/arm64/mm/fault.c            |  2 ++
 arch/arm64/mm/mteswap.c          | 14 +++++--------
 9 files changed, 60 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 3f8199ba265a..20dd06d70af5 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -25,7 +25,7 @@ unsigned long mte_copy_tags_to_user(void __user *to, void *from,
				    unsigned long n);
 int mte_save_tags(struct page *page);
 void mte_save_page_tags(const void *page_addr, void *tag_storage);
-bool mte_restore_tags(swp_entry_t entry, struct page *page);
+void mte_restore_tags(swp_entry_t entry, struct page *page);
 void mte_restore_page_tags(void *page_addr, const void *tag_storage);
 void mte_invalidate_tags(int type, pgoff_t offset);
 void mte_invalidate_tags_area(int type);
@@ -36,6 +36,8 @@ void mte_free_tag_storage(char *storage);
 
 /* track which pages have valid allocation tags */
 #define PG_mte_tagged	PG_arch_2
+/* simple lock to avoid multiple threads tagging the same page */
+#define PG_mte_lock	PG_arch_3
 
 static inline void set_page_mte_tagged(struct page *page)
 {
@@ -60,6 +62,33 @@ static inline bool page_mte_tagged(struct page *page)
 	return ret;
 }
 
+/*
+ * Lock the page for tagging and return 'true' if the page can be tagged,
+ * 'false' if already tagged. PG_mte_tagged is never cleared and therefore
+ * the locking only happens once for page initialisation.
+ *
+ * The page MTE lock state:
+ *
+ *   Locked:	PG_mte_lock && !PG_mte_tagged
+ *   Unlocked:	!PG_mte_lock || PG_mte_tagged
+ *
+ * Acquire semantics only if the page is tagged (returning 'false').
+ */
+static inline bool try_page_mte_tagging(struct page *page)
+{
+	if (!test_and_set_bit(PG_mte_lock, &page->flags))
+		return true;
+
+	/*
+	 * The tags are either being initialised or may have been initialised
+	 * already. Check if the PG_mte_tagged flag has been set or wait
+	 * otherwise.
+	 */
+	smp_cond_load_acquire(&page->flags, VAL & (1UL << PG_mte_tagged));
+
+	return false;
+}
+
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t old_pte, pte_t pte);
 void mte_copy_page_tags(void *kto, const void *kfrom);
@@ -86,6 +115,10 @@ static inline bool page_mte_tagged(struct page *page)
 {
 	return false;
 }
+static inline bool try_page_mte_tagging(struct page *page)
+{
+	return false;
+}
 static inline void mte_zero_clear_page_tags(void *addr)
 {
 }

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index c6a2d8891d2a..c99fc9aec373 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1047,8 +1047,8 @@ static inline void arch_swap_invalidate_area(int type)
 #define __HAVE_ARCH_SWAP_RESTORE
 static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
 {
-	if (system_supports_mte() && mte_restore_tags(entry, &folio->page))
-		set_page_mte_tagged(&folio->page);
+	if (system_supports_mte())
+		mte_restore_tags(entry, &folio->page);
 }
 
 #endif /* CONFIG_ARM64_MTE */

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index df11cfe61fcb..afb4ffd745c3 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2050,7 +2050,7 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
	 * Clear the tags in the zero page. This needs to be done via the
	 * linear map which has the Tagged attribute.
	 */
-	if (!page_mte_tagged(ZERO_PAGE(0))) {
+	if (try_page_mte_tagging(ZERO_PAGE(0))) {
		mte_clear_page_tags(lm_alias(empty_zero_page));
		set_page_mte_tagged(ZERO_PAGE(0));
	}

diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 84a085d536f8..f5bcb0dc6267 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -41,20 +41,14 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
	if (check_swap && is_swap_pte(old_pte)) {
		swp_entry_t entry = pte_to_swp_entry(old_pte);
 
-		if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) {
-			set_page_mte_tagged(page);
-			return;
-		}
+		if (!non_swap_entry(entry))
+			mte_restore_tags(entry, page);
	}
 
	if (!pte_is_tagged)
		return;
 
-	/*
-	 * Test PG_mte_tagged again in case it was racing with another
-	 * set_pte_at().
-	 */
-	if (!page_mte_tagged(page)) {
+	if (try_page_mte_tagging(page)) {
		mte_clear_page_tags(page_address(page));
		set_page_mte_tagged(page);
	}

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 817fdd1ab778..5626ddb540ce 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -1068,15 +1068,19 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
			clear_user(tags, MTE_GRANULES_PER_PAGE);
			kvm_release_pfn_clean(pfn);
		} else {
+			/*
+			 * Only locking to serialise with a concurrent
+			 * set_pte_at() in the VMM but still overriding the
+			 * tags, hence ignoring the return value.
+			 */
+			try_page_mte_tagging(page);
			num_tags = mte_copy_tags_from_user(maddr, tags,
							MTE_GRANULES_PER_PAGE);
 
-			/*
-			 * Set the flag after checking the write
-			 * completed fully
-			 */
-			if (num_tags == MTE_GRANULES_PER_PAGE)
-				set_page_mte_tagged(page);
+			/* uaccess failed, don't leave stale tags */
+			if (num_tags != MTE_GRANULES_PER_PAGE)
+				mte_clear_page_tags(page);
+			set_page_mte_tagged(page);
 
			kvm_release_pfn_dirty(pfn);
		}

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index e81bfb730629..fa2c85b93149 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1101,7 +1101,7 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
		return;
 
	for (i = 0; i < nr_pages; i++, page++) {
-		if (!page_mte_tagged(page)) {
+		if (try_page_mte_tagging(page)) {
			mte_clear_page_tags(page_address(page));
			set_page_mte_tagged(page);
		}

diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 731d8a35701e..8dd5a8fe64b4 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -23,6 +23,8 @@ void copy_highpage(struct page *to, struct page *from)
 
	if (system_supports_mte() && page_mte_tagged(from)) {
		page_kasan_tag_reset(to);
+		/* It's a new page, shouldn't have been tagged yet */
+		WARN_ON_ONCE(!try_page_mte_tagging(to));
		mte_copy_page_tags(kto, kfrom);
		set_page_mte_tagged(to);
	}

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index e09e0344c7a7..0b1c102b89c9 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -937,6 +937,8 @@ struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
 
 void tag_clear_highpage(struct page *page)
 {
+	/* Newly allocated page, shouldn't have been tagged yet */
+	WARN_ON_ONCE(!try_page_mte_tagging(page));
	mte_zero_clear_page_tags(page_address(page));
	set_page_mte_tagged(page);
 }

diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
index 70f913205db9..cd508ba80ab1 100644
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -46,21 +46,17 @@ int mte_save_tags(struct page *page)
	return 0;
 }
 
-bool mte_restore_tags(swp_entry_t entry, struct page *page)
+void mte_restore_tags(swp_entry_t entry, struct page *page)
 {
	void *tags = xa_load(&mte_pages, entry.val);
 
	if (!tags)
-		return false;
+		return;
 
-	/*
-	 * Test PG_mte_tagged again in case it was racing with another
-	 * set_pte_at().
-	 */
-	if (!test_and_set_bit(PG_mte_tagged, &page->flags))
+	if (try_page_mte_tagging(page)) {
		mte_restore_page_tags(page_address(page), tags);
-
-	return true;
+		set_page_mte_tagged(page);
+	}
 }
 
 void mte_invalidate_tags(int type, pgoff_t offset)

From patchwork Fri Nov 4 01:10:39 2022
Date: Thu, 3 Nov 2022 18:10:39 -0700
In-Reply-To: <20221104011041.290951-1-pcc@google.com>
Message-Id: <20221104011041.290951-7-pcc@google.com>
References: <20221104011041.290951-1-pcc@google.com>
Subject: [PATCH v5 6/8] KVM: arm64: unify the tests for VMAs in memslots when MTE is enabled
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Peter Collingbourne, Cornelia Huck, Catalin Marinas, Will Deacon, Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price, Vincenzo Frascino

Previously we allowed creating a memslot containing a private mapping that
was not VM_MTE_ALLOWED, but would later reject KVM_RUN with -EFAULT. Now we
reject the memory region at memslot creation time.
Since this is a minor tweak to the ABI (a VMM that created one of these
memslots would fail later anyway), no VMM to my knowledge has MTE support
yet, and the hardware with the necessary features is not generally
available, we can probably make this ABI change at this point.

Signed-off-by: Peter Collingbourne
Reviewed-by: Catalin Marinas
Reviewed-by: Steven Price
Reviewed-by: Cornelia Huck
---
 arch/arm64/kvm/mmu.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index fa2c85b93149..9ff9a271cf01 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1108,6 +1108,19 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
	}
 }
 
+static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
+{
+	/*
+	 * VM_SHARED mappings are not allowed with MTE to avoid races
+	 * when updating the PG_mte_tagged page flag, see
+	 * sanitise_mte_tags for more details.
+	 */
+	if (vma->vm_flags & VM_SHARED)
+		return false;
+
+	return vma->vm_flags & VM_MTE_ALLOWED;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
			  struct kvm_memory_slot *memslot, unsigned long hva,
			  unsigned long fault_status)
@@ -1284,9 +1297,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
	}
 
	if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
-		/* Check the VMM hasn't introduced a new VM_SHARED VMA */
-		if ((vma->vm_flags & VM_MTE_ALLOWED) &&
-		    !(vma->vm_flags & VM_SHARED)) {
+		/* Check the VMM hasn't introduced a new disallowed VMA */
+		if (kvm_vma_mte_allowed(vma)) {
			sanitise_mte_tags(kvm, pfn, vma_pagesize);
		} else {
			ret = -EFAULT;
@@ -1730,12 +1742,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
		if (!vma)
			break;
 
-		/*
-		 * VM_SHARED mappings are not allowed with MTE to avoid races
-		 * when updating the PG_mte_tagged page flag, see
-		 * sanitise_mte_tags for more details.
-		 */
-		if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED) {
+		if (kvm_has_mte(kvm) && !kvm_vma_mte_allowed(vma)) {
			ret = -EINVAL;
			break;
		}

From patchwork Fri Nov 4 01:10:40 2022
Date: Thu, 3 Nov 2022 18:10:40 -0700
In-Reply-To: <20221104011041.290951-1-pcc@google.com>
Message-Id: <20221104011041.290951-8-pcc@google.com>
References: <20221104011041.290951-1-pcc@google.com>
Subject: [PATCH v5 7/8] KVM: arm64: permit all VM_MTE_ALLOWED mappings with MTE enabled
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Peter Collingbourne, Cornelia Huck, Catalin Marinas, Will Deacon, Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price, Vincenzo Frascino

Certain VMMs such as crosvm have features (e.g. sandboxing) that depend on
being able to map guest memory as MAP_SHARED. The current restriction on
sharing MAP_SHARED pages with the guest is preventing the use of those
features with MTE. Now that the races between tasks concurrently clearing
tags on the same page have been fixed, remove this restriction.

Note that this is a relaxation of the ABI.
Signed-off-by: Peter Collingbourne
Reviewed-by: Catalin Marinas
Reviewed-by: Steven Price
Reviewed-by: Cornelia Huck
---
 arch/arm64/kvm/mmu.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 9ff9a271cf01..b9402d8b5a90 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1110,14 +1110,6 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 
 static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
 {
-	/*
-	 * VM_SHARED mappings are not allowed with MTE to avoid races
-	 * when updating the PG_mte_tagged page flag, see
-	 * sanitise_mte_tags for more details.
-	 */
-	if (vma->vm_flags & VM_SHARED)
-		return false;
-
	return vma->vm_flags & VM_MTE_ALLOWED;
 }

From patchwork Fri Nov 4 01:10:41 2022
Date: Thu, 3 Nov 2022 18:10:41 -0700
In-Reply-To: <20221104011041.290951-1-pcc@google.com>
Message-Id: <20221104011041.290951-9-pcc@google.com>
References: <20221104011041.290951-1-pcc@google.com>
Subject: [PATCH v5 8/8] Documentation: document the ABI changes for KVM_CAP_ARM_MTE
From: Peter Collingbourne
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: Peter Collingbourne, Cornelia Huck, Catalin Marinas, Will Deacon, Marc Zyngier, Evgenii Stepanov, kvm@vger.kernel.org, Steven Price, Vincenzo Frascino

Document both the restriction on VM_MTE_ALLOWED mappings and the
relaxation for shared mappings.
Signed-off-by: Peter Collingbourne
Acked-by: Catalin Marinas
Reviewed-by: Cornelia Huck
---
 Documentation/virt/kvm/api.rst | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index eee9f857a986..b55f80dadcfe 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -7385,8 +7385,9 @@ hibernation of the host; however the VMM needs to manually save/restore
 the tags as appropriate if the VM is migrated.
 
 When this capability is enabled all memory in memslots must be mapped as
-not-shareable (no MAP_SHARED), attempts to create a memslot with a
-MAP_SHARED mmap will result in an -EINVAL return.
+``MAP_ANONYMOUS`` or with a RAM-based file mapping (``tmpfs``, ``memfd``),
+attempts to create a memslot with an invalid mmap will result in an
+-EINVAL return.
 
 When enabled the VMM may make use of the ``KVM_ARM_MTE_COPY_TAGS`` ioctl to
 perform a bulk copy of tags to/from the guest.
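As background for the documented ABI: KVM_CAP_ARM_MTE is a VM capability enabled through KVM_ENABLE_CAP before any vCPUs are created. A minimal sketch of that step follows; the helper name `enable_mte_cap` is illustrative, and the fallback cap value is an assumption taken from recent uapi headers:

```c
#include <sys/ioctl.h>
#include <linux/kvm.h>

#ifndef KVM_CAP_ARM_MTE
#define KVM_CAP_ARM_MTE 205 /* assumption: value in recent uapi headers */
#endif

/*
 * Enable MTE for a VM. vm_fd is a VM file descriptor obtained from
 * KVM_CREATE_VM; must be called before creating vCPUs. Returns 0 on
 * success, -1 on failure (e.g. MTE unsupported or fd invalid).
 */
static int enable_mte_cap(int vm_fd)
{
	struct kvm_enable_cap cap = { .cap = KVM_CAP_ARM_MTE };

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}
```

Once this succeeds, every memslot's backing mapping must satisfy the rules documented above, and the VMM may use KVM_ARM_MTE_COPY_TAGS for tag migration.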