From patchwork Wed Jun 24 17:52:27 2020
X-Patchwork-Submitter: Catalin Marinas
X-Patchwork-Id: 11624001
From: Catalin Marinas
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v5 08/25] arm64: mte: Tags-aware copy_{user_,}highpage()
 implementations
Date: Wed, 24 Jun 2020 18:52:27 +0100
Message-Id: <20200624175244.25837-9-catalin.marinas@arm.com>
In-Reply-To: <20200624175244.25837-1-catalin.marinas@arm.com>
References: <20200624175244.25837-1-catalin.marinas@arm.com>
Cc: linux-arch@vger.kernel.org, Szabolcs Nagy, Andrey Konovalov,
 Kevin Brodsky, Peter Collingbourne, linux-mm@kvack.org, Andrew Morton,
 Vincenzo Frascino, Will Deacon, Dave P Martin

From: Vincenzo Frascino

When the Memory Tagging Extension is enabled, the tags need to be
preserved across page copy (e.g. for copy-on-write, page migration).

Introduce MTE-aware copy_{user_,}highpage() functions to copy tags to
the destination if the source page has the PG_mte_tagged flag set.
copy_user_page() does not need to handle tag copying since, with this
patch, it is only called by the DAX code where there is no source page
structure (and no source tags).

Signed-off-by: Vincenzo Frascino
Co-developed-by: Catalin Marinas
Signed-off-by: Catalin Marinas
Cc: Will Deacon
---

Notes:
    v5:
    - Handle tags in copy_highpage() (previously only copy_user_highpage()).
    - Ignore tags in copy_user_page() since it is only called directly by
      the DAX code where there is no source page structure.
    - Fix missing ret in mte_copy_page_tags().

    v4:
    - Moved the tag copying to a separate function in mte.S and only
      called if the source page has the PG_mte_tagged flag set.

 arch/arm64/include/asm/mte.h  |  4 ++++
 arch/arm64/include/asm/page.h | 14 +++++++++++---
 arch/arm64/lib/mte.S          | 19 +++++++++++++++++++
 arch/arm64/mm/copypage.c      | 25 +++++++++++++++++++++----
 4 files changed, 55 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 1716b3d02489..b2577eee62c2 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -19,6 +19,7 @@ void mte_clear_page_tags(void *addr);
 #define PG_mte_tagged	PG_arch_2
 
 void mte_sync_tags(pte_t *ptep, pte_t pte);
+void mte_copy_page_tags(void *kto, const void *kfrom);
 void flush_mte_state(void);
 
 #else
@@ -29,6 +30,9 @@ void flush_mte_state(void);
 static inline void mte_sync_tags(pte_t *ptep, pte_t pte)
 {
 }
+static inline void mte_copy_page_tags(void *kto, const void *kfrom)
+{
+}
 static inline void flush_mte_state(void)
 {
 }
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index c01b52add377..11734ce29702 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -15,18 +15,26 @@
 #include <linux/personality.h> /* for READ_IMPLIES_EXEC */
 #include <asm/pgtable-types.h>
 
+struct page;
+struct vm_area_struct;
+
 extern void __cpu_clear_user_page(void *p, unsigned long user);
-extern void __cpu_copy_user_page(void *to, const void *from,
-				 unsigned long user);
 extern void copy_page(void *to, const void *from);
 extern void clear_page(void *to);
 
+void copy_user_highpage(struct page *to, struct page *from,
+			unsigned long vaddr, struct vm_area_struct *vma);
+#define __HAVE_ARCH_COPY_USER_HIGHPAGE
+
+void copy_highpage(struct page *to, struct page *from);
+#define __HAVE_ARCH_COPY_HIGHPAGE
+
 #define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
 #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
 
 #define clear_user_page(addr,vaddr,pg)	__cpu_clear_user_page(addr, vaddr)
-#define copy_user_page(to,from,vaddr,pg) __cpu_copy_user_page(to, from, vaddr)
+#define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
 
 typedef struct page *pgtable_t;
 
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index a36705640086..3c3d0edbbca3 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -5,6 +5,7 @@
 #include <linux/linkage.h>
 
 #include <asm/assembler.h>
+#include <asm/page.h>
 #include <asm/sysreg.h>
 
	.arch	armv8.5-a+memtag
@@ -32,3 +33,21 @@ SYM_FUNC_START(mte_clear_page_tags)
	b.ne	1b
	ret
 SYM_FUNC_END(mte_clear_page_tags)
+
+/*
+ * Copy the tags from the source page to the destination one
+ *   x0 - address of the destination page
+ *   x1 - address of the source page
+ */
+SYM_FUNC_START(mte_copy_page_tags)
+	mov	x2, x0
+	mov	x3, x1
+	multitag_transfer_size x5, x6
+1:	ldgm	x4, [x3]
+	stgm	x4, [x2]
+	add	x2, x2, x5
+	add	x3, x3, x5
+	tst	x2, #(PAGE_SIZE - 1)
+	b.ne	1b
+	ret
+SYM_FUNC_END(mte_copy_page_tags)
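
[Aside, not part of the patch: the LDGM/STGM loop above may be easier
to follow as C. The sketch below is purely illustrative; ldgm_stride(),
load_tags() and store_tags() are hypothetical stand-ins for the
multitag_transfer_size macro and the LDGM/STGM instructions, which have
no C-level equivalents.]

/*
 * Illustrative C rendering of mte_copy_page_tags(). The helpers used
 * here are hypothetical: only LDGM/STGM can move allocation tags in
 * bulk, and the stride is derived from the GMID_EL1 register.
 */
static void mte_copy_page_tags_sketch(void *kto, const void *kfrom)
{
	/* Bytes of memory whose tags one LDGM/STGM pair transfers. */
	unsigned long stride = ldgm_stride();	/* multitag_transfer_size x5, x6 */
	unsigned long offset;

	for (offset = 0; offset < PAGE_SIZE; offset += stride) {
		u64 tags = load_tags(kfrom + offset);	/* ldgm x4, [x3] */
		store_tags(kto + offset, tags);		/* stgm x4, [x2] */
	}
}
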
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 2ee7b73433a5..4a2233fa674e 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -6,18 +6,35 @@
  * Copyright (C) 2012 ARM Ltd.
  */
 
+#include <linux/bitops.h>
 #include <linux/mm.h>
 
 #include <asm/page.h>
 #include <asm/cacheflush.h>
+#include <asm/cpufeature.h>
+#include <asm/mte.h>
 
-void __cpu_copy_user_page(void *kto, const void *kfrom, unsigned long vaddr)
+void copy_highpage(struct page *to, struct page *from)
 {
-	struct page *page = virt_to_page(kto);
+	struct page *kto = page_address(to);
+	struct page *kfrom = page_address(from);
 
	copy_page(kto, kfrom);
-	flush_dcache_page(page);
+
+	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
+		set_bit(PG_mte_tagged, &to->flags);
+		mte_copy_page_tags(kto, kfrom);
+	}
+}
+EXPORT_SYMBOL(copy_highpage);
+
+void copy_user_highpage(struct page *to, struct page *from,
+			unsigned long vaddr, struct vm_area_struct *vma)
+{
+	copy_highpage(to, from);
+	flush_dcache_page(to);
 }
-EXPORT_SYMBOL_GPL(__cpu_copy_user_page);
+EXPORT_SYMBOL_GPL(copy_user_highpage);
 
 void __cpu_clear_user_page(void *kaddr, unsigned long vaddr)
 {
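
[For context, also not part of the patch: defining
__HAVE_ARCH_COPY_USER_HIGHPAGE in page.h is what disables the generic
fallback in include/linux/highmem.h, which copies only the page data
and knows nothing about MTE tags. The v5.8-era generic version looks
roughly like this:]

#ifndef __HAVE_ARCH_COPY_USER_HIGHPAGE
static inline void copy_user_highpage(struct page *to, struct page *from,
	unsigned long vaddr, struct vm_area_struct *vma)
{
	char *vfrom, *vto;

	/* Map both pages and copy the data only; tags are lost. */
	vfrom = kmap_atomic(from);
	vto = kmap_atomic(to);
	copy_user_page(vto, vfrom, vaddr, to);
	kunmap_atomic(vto);
	kunmap_atomic(vfrom);
}
#endif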