From patchwork Sun Sep 4 02:16:05 2022
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 12965093
Subject: [PATCH 01/13] fsdax: Rename "busy page" to "pinned page"
From: Dan Williams
To: akpm@linux-foundation.org
Cc: Matthew Wilcox, Jan Kara, "Darrick J. Wong",
Wong" , Jason Gunthorpe , Christoph Hellwig , linux-mm@kvack.org, nvdimm@lists.linux.dev, linux-fsdevel@vger.kernel.org Date: Sat, 03 Sep 2022 19:16:05 -0700 Message-ID: <166225776577.2351842.7326849167823619889.stgit@dwillia2-xfh.jf.intel.com> In-Reply-To: <166225775968.2351842.11156458342486082012.stgit@dwillia2-xfh.jf.intel.com> References: <166225775968.2351842.11156458342486082012.stgit@dwillia2-xfh.jf.intel.com> User-Agent: StGit/0.18-3-g996c MIME-Version: 1.0 ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662257767; a=rsa-sha256; cv=none; b=ivX5JH6wj+HhmH7FjmUQhvcdt/5lxJ74lyeWYlapw6UhK9337LxNQKL9RBpQyKun33PvIF SqKDp4T9m7XJ6HN1JXnKBk/VnG8OxHksrHZvpMuHRPMbJ9+g2WDOg3Q51GWRLh1V8f6qxd N2Tn3BXqJGCcK+jJU1qvqXjTJcxiJi0= ARC-Authentication-Results: i=1; imf02.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b="D/AvnE8i"; dmarc=pass (policy=none) header.from=intel.com; spf=pass (imf02.hostedemail.com: domain of dan.j.williams@intel.com designates 134.134.136.126 as permitted sender) smtp.mailfrom=dan.j.williams@intel.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662257767; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=v3vK86LsJt3COzE06buSphht4rzk/Pon0rVWIOZxx2c=; b=QZGRTpHKB6tPzqxYlDC/THXMUGwHP4qWDWOpq+Ca7JfHzcR3AyUAm7JxPgrw5dj53MfDWK qA7msjGJPVkgqLa15+ARlz8Yri8gZk9uScsVcv0KtuJ52wOn3Smk9yC+RM8ESKE1uVUXqI q4l7ZX5k8sUJXh/55mnqAJxIJCXUL0c= X-Rspam-User: Authentication-Results: imf02.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b="D/AvnE8i"; dmarc=pass (policy=none) header.from=intel.com; spf=pass (imf02.hostedemail.com: domain of dan.j.williams@intel.com designates 134.134.136.126 as permitted sender) smtp.mailfrom=dan.j.williams@intel.com X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 72BBD80061 X-Stat-Signature: yha1zquazczwngpao7ucu4ieog9xg9hs X-HE-Tag: 1662257767-449580 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The FSDAX need to hold of truncate is for pages undergoing DMA. Replace the DAX specific "busy" terminology with the "pinned" term. This is in preparation from moving FSDAX from watching transitions of page->_refcount to '1' with observations of page_maybe_dma_pinned() returning false. Cc: Matthew Wilcox Cc: Jan Kara Cc: "Darrick J. 
Wong" Cc: Jason Gunthorpe Cc: Christoph Hellwig Signed-off-by: Dan Williams --- fs/dax.c | 16 ++++++++-------- fs/ext4/inode.c | 2 +- fs/fuse/dax.c | 4 ++-- fs/xfs/xfs_file.c | 2 +- fs/xfs/xfs_inode.c | 2 +- include/linux/dax.h | 10 ++++++---- 6 files changed, 19 insertions(+), 17 deletions(-) diff --git a/fs/dax.c b/fs/dax.c index c440dcef4b1b..0f22f7b46de0 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -407,7 +407,7 @@ static void dax_disassociate_entry(void *entry, struct address_space *mapping, } } -static struct page *dax_busy_page(void *entry) +static struct page *dax_pinned_page(void *entry) { unsigned long pfn; @@ -665,7 +665,7 @@ static void *grab_mapping_entry(struct xa_state *xas, } /** - * dax_layout_busy_page_range - find first pinned page in @mapping + * dax_layout_pinned_page_range - find first pinned page in @mapping * @mapping: address space to scan for a page with ref count > 1 * @start: Starting offset. Page containing 'start' is included. * @end: End offset. Page containing 'end' is included. If 'end' is LLONG_MAX, @@ -682,7 +682,7 @@ static void *grab_mapping_entry(struct xa_state *xas, * to be able to run unmap_mapping_range() and subsequently not race * mapping_mapped() becoming true. */ -struct page *dax_layout_busy_page_range(struct address_space *mapping, +struct page *dax_layout_pinned_page_range(struct address_space *mapping, loff_t start, loff_t end) { void *entry; @@ -727,7 +727,7 @@ struct page *dax_layout_busy_page_range(struct address_space *mapping, if (unlikely(dax_is_locked(entry))) entry = get_unlocked_entry(&xas, 0); if (entry) - page = dax_busy_page(entry); + page = dax_pinned_page(entry); put_unlocked_entry(&xas, entry, WAKE_NEXT); if (page) break; @@ -742,13 +742,13 @@ struct page *dax_layout_busy_page_range(struct address_space *mapping, xas_unlock_irq(&xas); return page; } -EXPORT_SYMBOL_GPL(dax_layout_busy_page_range); +EXPORT_SYMBOL_GPL(dax_layout_pinned_page_range); -struct page *dax_layout_busy_page(struct address_space *mapping) +struct page *dax_layout_pinned_page(struct address_space *mapping) { - return dax_layout_busy_page_range(mapping, 0, LLONG_MAX); + return dax_layout_pinned_page_range(mapping, 0, LLONG_MAX); } -EXPORT_SYMBOL_GPL(dax_layout_busy_page); +EXPORT_SYMBOL_GPL(dax_layout_pinned_page); static int __dax_invalidate_entry(struct address_space *mapping, pgoff_t index, bool trunc) diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 601214453c3a..bf49bf506965 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -3957,7 +3957,7 @@ int ext4_break_layouts(struct inode *inode) return -EINVAL; do { - page = dax_layout_busy_page(inode->i_mapping); + page = dax_layout_pinned_page(inode->i_mapping); if (!page) return 0; diff --git a/fs/fuse/dax.c b/fs/fuse/dax.c index e23e802a8013..e0b846f16bc5 100644 --- a/fs/fuse/dax.c +++ b/fs/fuse/dax.c @@ -443,7 +443,7 @@ static int fuse_setup_new_dax_mapping(struct inode *inode, loff_t pos, /* * Can't do inline reclaim in fault path. We call - * dax_layout_busy_page() before we free a range. And + * dax_layout_pinned_page() before we free a range. And * fuse_wait_dax_page() drops mapping->invalidate_lock and requires it. * In fault path we enter with mapping->invalidate_lock held and can't * drop it. 
 	 * drop it. Also in fault path we hold mapping->invalidate_lock shared
@@ -671,7 +671,7 @@ static int __fuse_dax_break_layouts(struct inode *inode, bool *retry,
 {
 	struct page *page;
 
-	page = dax_layout_busy_page_range(inode->i_mapping, start, end);
+	page = dax_layout_pinned_page_range(inode->i_mapping, start, end);
 	if (!page)
 		return 0;
 
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index c6c80265c0b2..954bb6e83796 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -822,7 +822,7 @@ xfs_break_dax_layouts(
 
 	ASSERT(xfs_isilocked(XFS_I(inode), XFS_MMAPLOCK_EXCL));
 
-	page = dax_layout_busy_page(inode->i_mapping);
+	page = dax_layout_pinned_page(inode->i_mapping);
 	if (!page)
 		return 0;
 
diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
index 28493c8e9bb2..9d0bea03501e 100644
--- a/fs/xfs/xfs_inode.c
+++ b/fs/xfs/xfs_inode.c
@@ -3481,7 +3481,7 @@ xfs_mmaplock_two_inodes_and_break_dax_layout(
 	 * need to unlock & lock the XFS_MMAPLOCK_EXCL which is not suitable
 	 * for this nested lock case.
 	 */
-	page = dax_layout_busy_page(VFS_I(ip2)->i_mapping);
+	page = dax_layout_pinned_page(VFS_I(ip2)->i_mapping);
 	if (page && page_ref_count(page) != 1) {
 		xfs_iunlock(ip2, XFS_MMAPLOCK_EXCL);
 		xfs_iunlock(ip1, XFS_MMAPLOCK_EXCL);
diff --git a/include/linux/dax.h b/include/linux/dax.h
index ba985333e26b..54f099166a29 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -157,8 +157,8 @@ static inline void fs_put_dax(struct dax_device *dax_dev, void *holder)
 
 int dax_writeback_mapping_range(struct address_space *mapping,
 		struct dax_device *dax_dev, struct writeback_control *wbc);
-struct page *dax_layout_busy_page(struct address_space *mapping);
-struct page *dax_layout_busy_page_range(struct address_space *mapping, loff_t start, loff_t end);
+struct page *dax_layout_pinned_page(struct address_space *mapping);
+struct page *dax_layout_pinned_page_range(struct address_space *mapping, loff_t start, loff_t end);
 dax_entry_t dax_lock_page(struct page *page);
 void dax_unlock_page(struct page *page, dax_entry_t cookie);
 dax_entry_t dax_lock_mapping_entry(struct address_space *mapping,
@@ -166,12 +166,14 @@ dax_entry_t dax_lock_mapping_entry(struct address_space *mapping,
 void dax_unlock_mapping_entry(struct address_space *mapping,
 		unsigned long index, dax_entry_t cookie);
 #else
-static inline struct page *dax_layout_busy_page(struct address_space *mapping)
+static inline struct page *dax_layout_pinned_page(struct address_space *mapping)
 {
 	return NULL;
 }
 
-static inline struct page *dax_layout_busy_page_range(struct address_space *mapping, pgoff_t start, pgoff_t nr_pages)
+static inline struct page *
+dax_layout_pinned_page_range(struct address_space *mapping, pgoff_t start,
+			     pgoff_t nr_pages)
 {
 	return NULL;
 }
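
For completeness, a sketch (again, not part of the patch) of the
break-layouts loop that consumes the renamed helper, modeled on
ext4_break_layouts() above. my_break_dax_layouts() and
my_wait_dax_page() are hypothetical names; the loop assumes the
dax_wait_page() helper currently in include/linux/dax.h:

#include <linux/dax.h>
#include <linux/pagemap.h>
#include <linux/sched.h>

/*
 * Hypothetical wait callback mirroring ext4_wait_dax_page(): drop the
 * invalidate lock so the pin holder can make progress, sleep, then
 * retake the lock before the caller rescans.
 */
static int my_wait_dax_page(struct inode *inode)
{
	filemap_invalidate_unlock(inode->i_mapping);
	schedule();
	filemap_invalidate_lock(inode->i_mapping);
	return 0;
}

/* Called with mapping->invalidate_lock held. */
static int my_break_dax_layouts(struct inode *inode)
{
	struct page *page;
	int error;

	do {
		/* Find the first page still pinned for DMA, if any. */
		page = dax_layout_pinned_page(inode->i_mapping);
		if (!page)
			return 0;	/* idle: safe to change the layout */

		/* Wait for this page to go idle, then rescan. */
		error = dax_wait_page(inode, page, my_wait_dax_page);
	} while (error == 0);

	return error;
}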