From patchwork Wed Jul 6 08:20:06 2022
X-Patchwork-Submitter: Chao Peng
X-Patchwork-Id: 12907546
From: Chao Peng <chao.p.peng@linux.intel.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org,
	linux-doc@vger.kernel.org, qemu-devel@nongnu.org,
	linux-kselftest@vger.kernel.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Hugh Dickins,
	Jeff Layton, "J. Bruce Fields", Andrew Morton, Shuah Khan,
	Mike Rapoport, Steven Price, "Maciej S. Szmigiero", Vlastimil Babka,
	Vishal Annapurve, Yu Zhang, Chao Peng, "Kirill A. Shutemov",
	luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com,
	ak@linux.intel.com, david@redhat.com, aarcange@redhat.com,
	ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret, Michael Roth,
	mhocko@suse.com, Muchun Song
Subject: [PATCH v7 04/14] mm/shmem: Support memfile_notifier
Date: Wed, 6 Jul 2022 16:20:06 +0800
Message-Id: <20220706082016.2603916-5-chao.p.peng@linux.intel.com>
In-Reply-To: <20220706082016.2603916-1-chao.p.peng@linux.intel.com>
References: <20220706082016.2603916-1-chao.p.peng@linux.intel.com>

From: "Kirill A. Shutemov"

Implement shmem as a memfile_notifier backing store. Essentially this
honours the memfile_notifier feature flags that restrict userspace
access, page migration and page reclaim, and implements the necessary
memfile_backing_store callbacks.

Signed-off-by: Kirill A. Shutemov
Signed-off-by: Chao Peng
---
 include/linux/shmem_fs.h |   2 +
 mm/shmem.c               | 109 ++++++++++++++++++++++++++++++++++++++-
 2 files changed, 110 insertions(+), 1 deletion(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index a68f982f22d1..6031c0b08d26 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -9,6 +9,7 @@
 #include 
 #include 
 #include 
+#include <linux/memfile_notifier.h>
 
 /* inode in-kernel data */
@@ -25,6 +26,7 @@ struct shmem_inode_info {
 	struct simple_xattrs	xattrs;		/* list of xattrs */
 	atomic_t		stop_eviction;	/* hold when working on inode */
 	struct timespec64	i_crtime;	/* file creation time */
+	struct memfile_node	memfile_node;	/* memfile node */
 	struct inode		vfs_inode;
 };
 
diff --git a/mm/shmem.c b/mm/shmem.c
index 6c8aef15a17d..627e315c3b4d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -905,6 +905,17 @@ static struct folio *shmem_get_partial_folio(struct inode *inode, pgoff_t index)
 	return page ? page_folio(page) : NULL;
 }
 
+static void notify_invalidate(struct inode *inode, struct folio *folio,
+			      pgoff_t start, pgoff_t end)
+{
+	struct shmem_inode_info *info = SHMEM_I(inode);
+
+	start = max(start, folio->index);
+	end = min(end, folio->index + folio_nr_pages(folio));
+
+	memfile_notifier_invalidate(&info->memfile_node, start, end);
+}
+
 /*
  * Remove range of pages and swap entries from page cache, and free them.
  * If !unfalloc, truncate or punch hole; if unfalloc, undo failed fallocate.
@@ -948,6 +959,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			}
 			index += folio_nr_pages(folio) - 1;
 
+			notify_invalidate(inode, folio, start, end);
+
 			if (!unfalloc || !folio_test_uptodate(folio))
 				truncate_inode_folio(mapping, folio);
 			folio_unlock(folio);
@@ -1021,6 +1034,9 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 					index--;
 					break;
 				}
+
+				notify_invalidate(inode, folio, start, end);
+
 				VM_BUG_ON_FOLIO(folio_test_writeback(folio), folio);
 				truncate_inode_folio(mapping, folio);
@@ -1092,6 +1108,13 @@ static int shmem_setattr(struct user_namespace *mnt_userns,
 		    (newsize > oldsize && (info->seals & F_SEAL_GROW)))
 			return -EPERM;
 
+		if (info->memfile_node.flags & MEMFILE_F_USER_INACCESSIBLE) {
+			if (oldsize)
+				return -EPERM;
+			if (!PAGE_ALIGNED(newsize))
+				return -EINVAL;
+		}
+
 		if (newsize != oldsize) {
 			error = shmem_reacct_size(SHMEM_I(inode)->flags,
 						  oldsize, newsize);
@@ -1336,6 +1359,8 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 		goto redirty;
 	if (!total_swap_pages)
 		goto redirty;
+	if (info->memfile_node.flags & MEMFILE_F_UNRECLAIMABLE)
+		goto redirty;
 
 	/*
 	 * Our capabilities prevent regular writeback or sync from ever calling
@@ -2271,6 +2296,9 @@ static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
 	if (ret)
 		return ret;
 
+	if (info->memfile_node.flags & MEMFILE_F_USER_INACCESSIBLE)
+		return -EPERM;
+
 	/* arm64 - allow memory tagging on RAM-based files */
 	vma->vm_flags |= VM_MTE_ALLOWED;
 
@@ -2306,6 +2334,7 @@ static struct inode *shmem_get_inode(struct super_block *sb, const struct inode
 		info->i_crtime = inode->i_mtime;
 		INIT_LIST_HEAD(&info->shrinklist);
 		INIT_LIST_HEAD(&info->swaplist);
+		memfile_node_init(&info->memfile_node);
 		simple_xattrs_init(&info->xattrs);
 		cache_no_acl(inode);
 		mapping_set_large_folios(inode->i_mapping);
@@ -2477,6 +2506,8 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
 		if ((info->seals & F_SEAL_GROW) && pos + len > inode->i_size)
 			return -EPERM;
 	}
+	if (unlikely(info->memfile_node.flags & MEMFILE_F_USER_INACCESSIBLE))
+		return -EPERM;
 
 	if (unlikely(info->seals & F_SEAL_AUTO_ALLOCATE))
 		sgp = SGP_NOALLOC;
@@ -2556,6 +2587,13 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
 		end_index = i_size >> PAGE_SHIFT;
 		if (index > end_index)
 			break;
+
+		if (SHMEM_I(inode)->memfile_node.flags &
+		    MEMFILE_F_USER_INACCESSIBLE) {
+			error = -EPERM;
+			break;
+		}
+
 		if (index == end_index) {
 			nr = i_size & ~PAGE_MASK;
 			if (nr <= offset)
@@ -2697,6 +2735,12 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 			goto out;
 		}
 
+		if ((info->memfile_node.flags & MEMFILE_F_USER_INACCESSIBLE) &&
+		    (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))) {
+			error = -EINVAL;
+			goto out;
+		}
+
 		shmem_falloc.waitq = &shmem_falloc_waitq;
 		shmem_falloc.start = (u64)unmap_start >> PAGE_SHIFT;
 		shmem_falloc.next = (unmap_end + 1) >> PAGE_SHIFT;
@@ -3806,6 +3850,20 @@ static int shmem_error_remove_page(struct address_space *mapping,
 	return 0;
 }
 
+#ifdef CONFIG_MIGRATION
+static int shmem_migrate_page(struct address_space *mapping,
+			      struct page *newpage, struct page *page,
+			      enum migrate_mode mode)
+{
+	struct inode *inode = mapping->host;
+	struct shmem_inode_info *info = SHMEM_I(inode);
+
+	if (info->memfile_node.flags & MEMFILE_F_UNMOVABLE)
+		return -EOPNOTSUPP;
+	return migrate_page(mapping, newpage, page, mode);
+}
+#endif
+
 const struct address_space_operations shmem_aops = {
 	.writepage	= shmem_writepage,
 	.dirty_folio	= noop_dirty_folio,
@@ -3814,7 +3872,7 @@ const struct address_space_operations shmem_aops = {
 	.write_end	= shmem_write_end,
 #endif
 #ifdef CONFIG_MIGRATION
-	.migratepage	= migrate_page,
+	.migratepage	= shmem_migrate_page,
 #endif
 	.error_remove_page = shmem_error_remove_page,
 };
@@ -3931,6 +3989,51 @@ static struct file_system_type shmem_fs_type = {
 	.fs_flags	= FS_USERNS_MOUNT,
 };
 
+#ifdef CONFIG_MEMFILE_NOTIFIER
+static struct memfile_node *shmem_lookup_memfile_node(struct file *file)
+{
+	struct inode *inode = file_inode(file);
+
+	if (!shmem_mapping(inode->i_mapping))
+		return NULL;
+
+	return &SHMEM_I(inode)->memfile_node;
+}
+
+
+static int shmem_get_pfn(struct file *file, pgoff_t offset, pfn_t *pfn,
+			 int *order)
+{
+	struct page *page;
+	int ret;
+
+	ret = shmem_getpage(file_inode(file), offset, &page, SGP_WRITE);
+	if (ret)
+		return ret;
+
+	unlock_page(page);
+	*pfn = page_to_pfn_t(page);
+	*order = thp_order(compound_head(page));
+	return 0;
+}
+
+static void shmem_put_pfn(pfn_t pfn)
+{
+	struct page *page = pfn_t_to_page(pfn);
+
+	if (!page)
+		return;
+
+	put_page(page);
+}
+
+static struct memfile_backing_store shmem_backing_store = {
+	.lookup_memfile_node = shmem_lookup_memfile_node,
+	.get_pfn = shmem_get_pfn,
+	.put_pfn = shmem_put_pfn,
+};
+#endif /* CONFIG_MEMFILE_NOTIFIER */
+
 void __init shmem_init(void)
 {
 	int error;
@@ -3956,6 +4059,10 @@ void __init shmem_init(void)
 	else
 		shmem_huge = SHMEM_HUGE_NEVER; /* just in case it was patched */
 #endif
+
+#ifdef CONFIG_MEMFILE_NOTIFIER
+	memfile_register_backing_store(&shmem_backing_store);
+#endif
 	return;
 
 out1:
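
[Editor's note] Taken together, the three callbacks registered above give an
in-kernel consumer (KVM, in the rest of this series) a file-offset-to-pfn
path that bypasses the userspace accessors the MEMFILE_F_* flags close off:
MEMFILE_F_USER_INACCESSIBLE rejects mmap/read/write and unaligned resizes,
MEMFILE_F_UNRECLAIMABLE keeps pages away from swap, MEMFILE_F_UNMOVABLE
fails migration, and truncation or hole punching is forwarded through
memfile_notifier_invalidate() via notify_invalidate(). The sketch below is
illustrative only, not part of the patch: it assumes the interface from
<linux/memfile_notifier.h> introduced earlier in this series (patch 03/14,
not shown here), and example_consume_page() is a hypothetical consumer.

/*
 * Illustrative consumer of the shmem backing store registered above.
 * Assumes <linux/memfile_notifier.h> from patch 03/14; only the bs->...
 * callback signatures are taken from this patch.
 */
#include <linux/memfile_notifier.h>
#include <linux/pfn_t.h>

static int example_consume_page(struct memfile_backing_store *bs,
				struct file *file, pgoff_t index)
{
	struct memfile_node *node;
	pfn_t pfn;
	int order;
	int ret;

	/* NULL means the file does not belong to this backing store. */
	node = bs->lookup_memfile_node(file);
	if (!node)
		return -EINVAL;

	/*
	 * shmem_get_pfn() allocates on demand (SGP_WRITE) and returns the
	 * page unlocked with a reference held; order reports the backing
	 * page size so a huge page can be mapped as one unit.
	 */
	ret = bs->get_pfn(file, index, &pfn, &order);
	if (ret)
		return ret;

	/* ... install pfn (covering 1 << order base pages) for the guest ... */

	/* Balance the reference taken by get_pfn(). */
	bs->put_pfn(pfn);
	return 0;
}

Any later truncate or hole punch on the file would then reach such a
consumer through memfile_notifier_invalidate() before the pages are freed,
which is exactly what notify_invalidate() wires into shmem_undo_range().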