From patchwork Mon Aug 24 18:27:55 2020
X-Patchwork-Submitter: Catalin Marinas
X-Patchwork-Id: 11733983
From: Catalin Marinas
To: linux-arm-kernel@lists.infradead.org
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, Will Deacon,
    Dave P Martin, Vincenzo Frascino, Szabolcs Nagy, Kevin Brodsky,
    Andrey Konovalov, Peter Collingbourne, Andrew Morton, Steven Price
Subject: [PATCH v8 25/28] arm64: mte: Enable swap of tagged pages
Date: Mon, 24 Aug 2020 19:27:55 +0100
Message-Id: <20200824182758.27267-26-catalin.marinas@arm.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200824182758.27267-1-catalin.marinas@arm.com>
References: <20200824182758.27267-1-catalin.marinas@arm.com>

From: Steven Price

When swapping pages out to disk it is necessary to save any tags that
have been set, and to restore them when the pages are swapped back in.
Make use of the new page flag (PG_ARCH_2, locally named PG_mte_tagged)
to identify pages with tags. When such a page is swapped out, its tags
are stored in memory and later restored when the page is brought back
in. Because shmem can swap pages back in without restoring the
userspace PTE it is also necessary to add a hook for shmem.

Signed-off-by: Steven Price
[catalin.marinas@arm.com: move function prototypes to mte.h]
[catalin.marinas@arm.com: drop '_tags' from arch_swap_restore_tags()]
Signed-off-by: Catalin Marinas
Cc: Andrew Morton
Cc: Will Deacon
---

Notes:
    v6:
     - Remove stale copy of include/asm-generic/pgtable.h (bad conflict
       resolution in v5).
     - check_swap should be true only for nr_pages == 1.

    New in v4.
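As a reviewer aid, the contract the new hooks implement can be summed up
in a few lines: the tags must be saved before the page content reaches
the swap device and restored after it is read back, without relying on
the page data itself. Below is a minimal userspace C model of that round
trip; the struct, the function bodies and the driver are invented for
illustration, only the hook names match the patch:

#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins, not kernel types */
struct page {
	bool mte_tagged;	/* models PG_mte_tagged */
	bool tags_saved;	/* models an entry in the tag store */
};

static int arch_prepare_to_swap(struct page *page)
{
	if (page->mte_tagged)
		page->tags_saved = true;	/* mte_save_tags() */
	return 0;
}

static void arch_swap_restore(struct page *page)
{
	if (page->tags_saved)
		page->mte_tagged = true;	/* mte_restore_tags() */
}

int main(void)
{
	struct page p = { .mte_tagged = true, .tags_saved = false };

	assert(arch_prepare_to_swap(&p) == 0);	/* before the write-out */
	p.mte_tagged = false;			/* page content recycled */
	arch_swap_restore(&p);			/* after the read-in */
	assert(p.mte_tagged);			/* tags survived the trip */
	return 0;
}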
 arch/arm64/include/asm/mte.h     |  8 +++
 arch/arm64/include/asm/pgtable.h | 32 ++++++++++++
 arch/arm64/kernel/mte.c          | 19 +++++++-
 arch/arm64/lib/mte.S             | 45 +++++++++++++++++
 arch/arm64/mm/Makefile           |  1 +
 arch/arm64/mm/mteswap.c          | 83 ++++++++++++++++++++++++++++++++
 6 files changed, 187 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/mm/mteswap.c

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 7ea0c0e526d1..1c99fcadb58c 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -21,6 +21,14 @@ unsigned long mte_copy_tags_from_user(void *to, const void __user *from,
 				      unsigned long n);
 unsigned long mte_copy_tags_to_user(void __user *to, void *from,
 				    unsigned long n);
+int mte_save_tags(struct page *page);
+void mte_save_page_tags(const void *page_addr, void *tag_storage);
+bool mte_restore_tags(swp_entry_t entry, struct page *page);
+void mte_restore_page_tags(void *page_addr, const void *tag_storage);
+void mte_invalidate_tags(int type, pgoff_t offset);
+void mte_invalidate_tags_area(int type);
+void *mte_allocate_tag_storage(void);
+void mte_free_tag_storage(char *storage);
 
 #ifdef CONFIG_ARM64_MTE
 
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 057c40b6f5e0..1c46fcd873f6 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -867,6 +867,38 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
 
 extern int kern_addr_valid(unsigned long addr);
 
+#ifdef CONFIG_ARM64_MTE
+
+#define __HAVE_ARCH_PREPARE_TO_SWAP
+static inline int arch_prepare_to_swap(struct page *page)
+{
+	if (system_supports_mte())
+		return mte_save_tags(page);
+	return 0;
+}
+
+#define __HAVE_ARCH_SWAP_INVALIDATE
+static inline void arch_swap_invalidate_page(int type, pgoff_t offset)
+{
+	if (system_supports_mte())
+		mte_invalidate_tags(type, offset);
+}
+
+static inline void arch_swap_invalidate_area(int type)
+{
+	if (system_supports_mte())
+		mte_invalidate_tags_area(type);
+}
+
+#define __HAVE_ARCH_SWAP_RESTORE
+static inline void arch_swap_restore(swp_entry_t entry, struct page *page)
+{
+	if (system_supports_mte() && mte_restore_tags(entry, page))
+		set_bit(PG_mte_tagged, &page->flags);
+}
+
+#endif /* CONFIG_ARM64_MTE */
+
 /*
  * On AArch64, the cache coherency is handled via the set_pte_at() function.
  */
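A sizing note for the hooks above: MTE keeps one 4-bit tag per 16-byte
granule, so the out-of-line storage that mte_save_tags() allocates per
page (see mteswap.c below) is PAGE_SIZE / 16 / 2 bytes, i.e. 128 bytes
for 4K pages. A standalone check of that arithmetic, userspace-only,
with the helper name invented for the example:

#include <assert.h>
#include <stdio.h>

/* Bytes of MTE tag storage per page: one 4-bit tag per 16-byte
 * granule, two tags packed per byte. Illustration, not kernel code. */
static unsigned long mte_tag_bytes(unsigned long page_size)
{
	return page_size / 16 / 2;
}

int main(void)
{
	assert(mte_tag_bytes(4096) == 128);	/* 4K pages */
	assert(mte_tag_bytes(65536) == 2048);	/* 64K pages */
	printf("4K page -> %lu bytes of tags\n", mte_tag_bytes(4096));
	return 0;
}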
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 56e79807006c..52a0638ed967 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -10,6 +10,8 @@
 #include <linux/sched.h>
 #include <linux/sched/mm.h>
 #include <linux/string.h>
+#include <linux/swap.h>
+#include <linux/swapops.h>
 #include <linux/thread_info.h>
 #include <linux/uio.h>
 
@@ -18,15 +20,30 @@
 #include <asm/ptrace.h>
 #include <asm/sysreg.h>
 
+static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
+{
+	pte_t old_pte = READ_ONCE(*ptep);
+
+	if (check_swap && is_swap_pte(old_pte)) {
+		swp_entry_t entry = pte_to_swp_entry(old_pte);
+
+		if (!non_swap_entry(entry) && mte_restore_tags(entry, page))
+			return;
+	}
+
+	mte_clear_page_tags(page_address(page));
+}
+
 void mte_sync_tags(pte_t *ptep, pte_t pte)
 {
 	struct page *page = pte_page(pte);
 	long i, nr_pages = compound_nr(page);
+	bool check_swap = nr_pages == 1;
 
 	/* if PG_mte_tagged is set, tags have already been initialised */
 	for (i = 0; i < nr_pages; i++, page++) {
 		if (!test_and_set_bit(PG_mte_tagged, &page->flags))
-			mte_clear_page_tags(page_address(page));
+			mte_sync_page_tags(page, ptep, check_swap);
 	}
 }
 
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index 434f81d9a180..03ca6d8b8670 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -104,3 +104,48 @@ SYM_FUNC_START(mte_copy_tags_to_user)
 2:	sub	x0, x0, x3		// update the number of tags copied
 	ret
 SYM_FUNC_END(mte_copy_tags_to_user)
+
+/*
+ * Save the tags in a page
+ *   x0 - page address
+ *   x1 - tag storage
+ */
+SYM_FUNC_START(mte_save_page_tags)
+	multitag_transfer_size x7, x5
+1:
+	mov	x2, #0
+2:
+	ldgm	x5, [x0]
+	orr	x2, x2, x5
+	add	x0, x0, x7
+	tst	x0, #0xFF		// 16 tag values fit in a register,
+	b.ne	2b			// which is 16*16=256 bytes
+
+	str	x2, [x1], #8
+
+	tst	x0, #(PAGE_SIZE - 1)
+	b.ne	1b
+
+	ret
+SYM_FUNC_END(mte_save_page_tags)
+
+/*
+ * Restore the tags in a page
+ *   x0 - page address
+ *   x1 - tag storage
+ */
+SYM_FUNC_START(mte_restore_page_tags)
+	multitag_transfer_size x7, x5
+1:
+	ldr	x2, [x1], #8
+2:
+	stgm	x2, [x0]
+	add	x0, x0, x7
+	tst	x0, #0xFF
+	b.ne	2b
+
+	tst	x0, #(PAGE_SIZE - 1)
+	b.ne	1b
+
+	ret
+SYM_FUNC_END(mte_restore_page_tags)
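The loops above lean on LDGM/STGM transferring the tags for
multitag_transfer_size bytes per iteration and on the fact that the 16
tags covering a 256-byte block fit in one 64-bit register (hence the
tst x0, #0xFF before each 8-byte str/ldr of tag data). The userspace C
sketch below mirrors that storage layout, one 4-bit tag per 16-byte
granule with 16 tags packed per 64-bit word; it is a model for
reviewers under those assumptions, not the kernel implementation:

#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE	4096
#define GRANULE		16	/* bytes covered by one tag */
#define TAGS_PER_WORD	16	/* 16 x 4-bit tags per 64-bit word */

/* Pack one 4-bit tag per granule into 64-bit words, modelling the
 * layout of the storage written by mte_save_page_tags() above. */
static void save_tags(const uint8_t *tags, uint64_t *storage)
{
	for (unsigned int i = 0; i < PAGE_SIZE / GRANULE; i++)
		storage[i / TAGS_PER_WORD] |=
			(uint64_t)(tags[i] & 0xf) << (4 * (i % TAGS_PER_WORD));
}

static void restore_tags(uint8_t *tags, const uint64_t *storage)
{
	for (unsigned int i = 0; i < PAGE_SIZE / GRANULE; i++)
		tags[i] = (storage[i / TAGS_PER_WORD] >>
			   (4 * (i % TAGS_PER_WORD))) & 0xf;
}

int main(void)
{
	uint8_t in[PAGE_SIZE / GRANULE], out[PAGE_SIZE / GRANULE];
	uint64_t store[PAGE_SIZE / 256] = { 0 };	/* 128 bytes */

	for (unsigned int i = 0; i < sizeof(in); i++)
		in[i] = i & 0xf;		/* arbitrary tag pattern */
	save_tags(in, store);
	restore_tags(out, store);
	assert(memcmp(in, out, sizeof(in)) == 0);	/* round trip */
	return 0;
}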
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index d91030f0ffee..5bcc9e0aa259 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -8,6 +8,7 @@ obj-$(CONFIG_PTDUMP_CORE)	+= dump.o
 obj-$(CONFIG_PTDUMP_DEBUGFS)	+= ptdump_debugfs.o
 obj-$(CONFIG_NUMA)		+= numa.o
 obj-$(CONFIG_DEBUG_VIRTUAL)	+= physaddr.o
+obj-$(CONFIG_ARM64_MTE)		+= mteswap.o
 KASAN_SANITIZE_physaddr.o	+= n
 
 obj-$(CONFIG_KASAN)		+= kasan_init.o
diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
new file mode 100644
index 000000000000..c52c1847079c
--- /dev/null
+++ b/arch/arm64/mm/mteswap.c
@@ -0,0 +1,83 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <linux/pagemap.h>
+#include <linux/xarray.h>
+#include <linux/slab.h>
+#include <linux/swap.h>
+#include <linux/swapops.h>
+#include <asm/mte.h>
+
+static DEFINE_XARRAY(mte_pages);
+
+void *mte_allocate_tag_storage(void)
+{
+	/* tags granule is 16 bytes, 2 tags stored per byte */
+	return kmalloc(PAGE_SIZE / 16 / 2, GFP_KERNEL);
+}
+
+void mte_free_tag_storage(char *storage)
+{
+	kfree(storage);
+}
+
+int mte_save_tags(struct page *page)
+{
+	void *tag_storage, *ret;
+
+	if (!test_bit(PG_mte_tagged, &page->flags))
+		return 0;
+
+	tag_storage = mte_allocate_tag_storage();
+	if (!tag_storage)
+		return -ENOMEM;
+
+	mte_save_page_tags(page_address(page), tag_storage);
+
+	/* page_private contains the swap entry.val set in do_swap_page */
+	ret = xa_store(&mte_pages, page_private(page), tag_storage, GFP_KERNEL);
+	if (WARN(xa_is_err(ret), "Failed to store MTE tags")) {
+		mte_free_tag_storage(tag_storage);
+		return xa_err(ret);
+	} else if (ret) {
+		/* Entry is being replaced, free the old entry */
+		mte_free_tag_storage(ret);
+	}
+
+	return 0;
+}
+
+bool mte_restore_tags(swp_entry_t entry, struct page *page)
+{
+	void *tags = xa_load(&mte_pages, entry.val);
+
+	if (!tags)
+		return false;
+
+	mte_restore_page_tags(page_address(page), tags);
+
+	return true;
+}
+
+void mte_invalidate_tags(int type, pgoff_t offset)
+{
+	swp_entry_t entry = swp_entry(type, offset);
+	void *tags = xa_erase(&mte_pages, entry.val);
+
+	mte_free_tag_storage(tags);
+}
+
+void mte_invalidate_tags_area(int type)
+{
+	swp_entry_t entry = swp_entry(type, 0);
+	swp_entry_t last_entry = swp_entry(type + 1, 0);
+	void *tags;
+
+	XA_STATE(xa_state, &mte_pages, entry.val);
+
+	xa_lock(&mte_pages);
+	xas_for_each(&xa_state, tags, last_entry.val - 1) {
+		__xa_erase(&mte_pages, xa_state.xa_index);
+		mte_free_tag_storage(tags);
+	}
+	xa_unlock(&mte_pages);
+}
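To summarise the ownership rules in mteswap.c: tag blobs are keyed by
the raw swap entry value (the code relies on page_private() holding
entry.val, per the comment in mte_save_tags()), storing over an
existing entry frees the old blob, and invalidation frees the blob once
the swap slot itself is released. A userspace model of that lifecycle,
with a flat array standing in for the xarray and every name invented
for the example:

#include <assert.h>
#include <stdlib.h>

#define MAX_SLOTS 64
static void *tag_store[MAX_SLOTS];	/* stands in for mte_pages */

static int save_tags(unsigned long entry_val)	/* ~mte_save_tags() */
{
	void *tags = malloc(128);	/* PAGE_SIZE / 16 / 2 */

	if (!tags)
		return -1;
	free(tag_store[entry_val]);	/* a replaced entry is freed */
	tag_store[entry_val] = tags;
	return 0;
}

static void *restore_tags(unsigned long entry_val)	/* ~mte_restore_tags() */
{
	return tag_store[entry_val];	/* NULL: page had no tags */
}

static void invalidate_tags(unsigned long entry_val)	/* ~mte_invalidate_tags() */
{
	free(tag_store[entry_val]);
	tag_store[entry_val] = NULL;
}

int main(void)
{
	assert(save_tags(3) == 0);		/* tagged page swapped out */
	assert(restore_tags(3) != NULL);	/* swap in: tags found */
	assert(restore_tags(5) == NULL);	/* untagged page: no entry */
	invalidate_tags(3);			/* swap slot freed */
	assert(restore_tags(3) == NULL);
	return 0;
}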