From patchwork Mon Nov 8 04:05:24 2021
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig
Subject: [PATCH v2 01/28] csky,sparc: Declare flush_dcache_folio()
Date: Mon, 8 Nov 2021 04:05:24 +0000
Message-Id: <20211108040551.1942823-2-willy@infradead.org>
In-Reply-To: <20211108040551.1942823-1-willy@infradead.org>

These architectures do not include asm-generic/cacheflush.h, so they need
to declare flush_dcache_folio() themselves.

Fixes: 08b0b0059bf1 ("mm: Add flush_dcache_folio()")
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Geert Uytterhoeven
---
 arch/csky/abiv1/inc/abi/cacheflush.h   | 1 +
 arch/csky/abiv2/inc/abi/cacheflush.h   | 2 ++
 arch/sparc/include/asm/cacheflush_32.h | 1 +
 arch/sparc/include/asm/cacheflush_64.h | 1 +
 4 files changed, 5 insertions(+)

diff --git a/arch/csky/abiv1/inc/abi/cacheflush.h b/arch/csky/abiv1/inc/abi/cacheflush.h
index ed62e2066ba7..432aef1f1dc2 100644
--- a/arch/csky/abiv1/inc/abi/cacheflush.h
+++ b/arch/csky/abiv1/inc/abi/cacheflush.h
@@ -9,6 +9,7 @@
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 extern void flush_dcache_page(struct page *);
+void flush_dcache_folio(struct folio *folio);
 
 #define flush_cache_mm(mm)			dcache_wbinv_all()
 #define flush_cache_page(vma, page, pfn)	cache_wbinv_all()
diff --git a/arch/csky/abiv2/inc/abi/cacheflush.h b/arch/csky/abiv2/inc/abi/cacheflush.h
index a565e00c3f70..7e8bef60958c 100644
--- a/arch/csky/abiv2/inc/abi/cacheflush.h
+++ b/arch/csky/abiv2/inc/abi/cacheflush.h
@@ -25,6 +25,8 @@ static inline void flush_dcache_page(struct page *page)
 		clear_bit(PG_dcache_clean, &page->flags);
 }
 
+void flush_dcache_folio(struct folio *folio);
+
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
 #define flush_icache_page(vma, page)		do { } while (0)
diff --git a/arch/sparc/include/asm/cacheflush_32.h b/arch/sparc/include/asm/cacheflush_32.h
index 41c6d734a474..9991c18f4980 100644
--- a/arch/sparc/include/asm/cacheflush_32.h
+++ b/arch/sparc/include/asm/cacheflush_32.h
@@ -37,6 +37,7 @@
 void sparc_flush_page_to_ram(struct page *page);
 
+void flush_dcache_folio(struct folio *folio);
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 #define flush_dcache_page(page)	sparc_flush_page_to_ram(page)
 
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
diff --git a/arch/sparc/include/asm/cacheflush_64.h b/arch/sparc/include/asm/cacheflush_64.h
index b9341836597e..9ab59a73c28b 100644
--- a/arch/sparc/include/asm/cacheflush_64.h
+++ b/arch/sparc/include/asm/cacheflush_64.h
@@ -47,6 +47,7 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page);
 void __flush_dcache_range(unsigned long start, unsigned long end);
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 void flush_dcache_page(struct page *page);
+void flush_dcache_folio(struct folio *folio);
 
 #define flush_icache_page(vma, pg)	do { } while(0)
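
Architectures that do include asm-generic/cacheflush.h pick up a generic
flush_dcache_folio() instead. For reference, a minimal fallback, in the
spirit of the generic one added by the commit in the Fixes: tag, simply
loops the page-based primitive over the folio (a sketch, not part of this
patch):

void flush_dcache_folio(struct folio *folio)
{
	long i, nr = folio_nr_pages(folio);

	/* Flush each constituent page through the existing primitive. */
	for (i = 0; i < nr; i++)
		flush_dcache_page(folio_page(folio, i));
}
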
From patchwork Mon Nov 8 04:05:25 2021
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig
Subject: [PATCH v2 02/28] mm: Add functions to zero portions of a folio
Date: Mon, 8 Nov 2021 04:05:25 +0000
Message-Id: <20211108040551.1942823-3-willy@infradead.org>
In-Reply-To: <20211108040551.1942823-1-willy@infradead.org>

These functions are wrappers around zero_user_segments(), which means
that zero_user_segments() must now be available for compound pages even
when CONFIG_TRANSPARENT_HUGEPAGE is disabled.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 include/linux/highmem.h | 44 ++++++++++++++++++++++++++++++++++++++---
 mm/highmem.c            |  2 --
 2 files changed, 41 insertions(+), 5 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 25aff0f2ed0b..c343c69bb5b4 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -231,10 +231,10 @@ static inline void tag_clear_highpage(struct page *page)
  * If we pass in a base or tail page, we can zero up to PAGE_SIZE.
  * If we pass in a head page, we can zero up to the size of the compound page.
  */
-#if defined(CONFIG_HIGHMEM) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
+#ifdef CONFIG_HIGHMEM
 void zero_user_segments(struct page *page, unsigned start1, unsigned end1,
 		unsigned start2, unsigned end2);
-#else /* !HIGHMEM || !TRANSPARENT_HUGEPAGE */
+#else
 static inline void zero_user_segments(struct page *page,
 		unsigned start1, unsigned end1,
 		unsigned start2, unsigned end2)
@@ -254,7 +254,7 @@ static inline void zero_user_segments(struct page *page,
 	for (i = 0; i < compound_nr(page); i++)
 		flush_dcache_page(page + i);
 }
-#endif /* !HIGHMEM || !TRANSPARENT_HUGEPAGE */
+#endif
 
 static inline void zero_user_segment(struct page *page,
 	unsigned start, unsigned end)
@@ -364,4 +364,42 @@ static inline void memzero_page(struct page *page, size_t offset, size_t len)
 	kunmap_local(addr);
 }
 
+/**
+ * folio_zero_segments() - Zero two byte ranges in a folio.
+ * @folio: The folio to write to.
+ * @start1: The first byte to zero.
+ * @end1: One more than the last byte in the first range.
+ * @start2: The first byte to zero in the second range.
+ * @end2: One more than the last byte in the second range.
+ */
+static inline void folio_zero_segments(struct folio *folio,
+		size_t start1, size_t end1, size_t start2, size_t end2)
+{
+	zero_user_segments(&folio->page, start1, end1, start2, end2);
+}
+
+/**
+ * folio_zero_segment() - Zero a byte range in a folio.
+ * @folio: The folio to write to.
+ * @start: The first byte to zero.
+ * @end: One more than the last byte in the range.
+ */
+static inline void folio_zero_segment(struct folio *folio,
+		size_t start, size_t end)
+{
+	zero_user_segments(&folio->page, start, end, 0, 0);
+}
+
+/**
+ * folio_zero_range() - Zero a byte range in a folio.
+ * @folio: The folio to write to.
+ * @start: The first byte to zero.
+ * @length: The number of bytes to zero.
+ */
+static inline void folio_zero_range(struct folio *folio,
+		size_t start, size_t length)
+{
+	zero_user_segments(&folio->page, start, start + length, 0, 0);
+}
+
 #endif /* _LINUX_HIGHMEM_H */
diff --git a/mm/highmem.c b/mm/highmem.c
index 88f65f155845..819d41140e5b 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -359,7 +359,6 @@ void kunmap_high(struct page *page)
 }
 EXPORT_SYMBOL(kunmap_high);
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 void zero_user_segments(struct page *page, unsigned start1, unsigned end1,
 		unsigned start2, unsigned end2)
 {
@@ -416,7 +415,6 @@ void zero_user_segments(struct page *page, unsigned start1, unsigned end1,
 	BUG_ON((start1 | start2 | end1 | end2) != 0);
 }
 EXPORT_SYMBOL(zero_user_segments);
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif /* CONFIG_HIGHMEM */
 
 #ifdef CONFIG_KMAP_LOCAL
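
A typical use is zeroing whatever part of a folio lies beyond EOF. A
hypothetical caller (names invented for illustration; the iomap
conversion later in this series does something very similar):

/* Zero the portion of @folio that lies beyond @isize, if any. */
static void myfs_zero_past_eof(struct folio *folio, loff_t isize)
{
	size_t offset = offset_in_folio(folio, isize);

	if (folio_pos(folio) >= isize)
		folio_zero_segment(folio, 0, folio_size(folio));
	else if (offset < folio_size(folio))
		folio_zero_segment(folio, offset, folio_size(folio));
}
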
"Darrick J . Wong " Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Christoph Hellwig Subject: [PATCH v2 03/28] fs: Remove FS_THP_SUPPORT Date: Mon, 8 Nov 2021 04:05:26 +0000 Message-Id: <20211108040551.1942823-4-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211108040551.1942823-1-willy@infradead.org> References: <20211108040551.1942823-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Instead of setting a bit in the fs_flags to set a bit in the address_space, set the bit in the address_space directly. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. Wong --- fs/inode.c | 2 -- include/linux/fs.h | 1 - include/linux/pagemap.h | 16 ++++++++++++++++ mm/shmem.c | 3 ++- 4 files changed, 18 insertions(+), 4 deletions(-) diff --git a/fs/inode.c b/fs/inode.c index 9abc88d7959c..d6386b6d5a6e 100644 --- a/fs/inode.c +++ b/fs/inode.c @@ -180,8 +180,6 @@ int inode_init_always(struct super_block *sb, struct inode *inode) mapping->a_ops = &empty_aops; mapping->host = inode; mapping->flags = 0; - if (sb->s_type->fs_flags & FS_THP_SUPPORT) - __set_bit(AS_THP_SUPPORT, &mapping->flags); mapping->wb_err = 0; atomic_set(&mapping->i_mmap_writable, 0); #ifdef CONFIG_READ_ONLY_THP_FOR_FS diff --git a/include/linux/fs.h b/include/linux/fs.h index 4137a9bfae7a..3c2fcabf9d12 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -2518,7 +2518,6 @@ struct file_system_type { #define FS_USERNS_MOUNT 8 /* Can be mounted by userns root */ #define FS_DISALLOW_NOTIFY_PERM 16 /* Disable fanotify permission events */ #define FS_ALLOW_IDMAP 32 /* FS has been updated to handle vfs idmappings. */ -#define FS_THP_SUPPORT 8192 /* Remove once all fs converted */ #define FS_RENAME_DOES_D_MOVE 32768 /* FS will handle d_move() during rename() internally. */ int (*init_fs_context)(struct fs_context *); const struct fs_parameter_spec *parameters; diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index db2c3e3eb1cf..471f0c422831 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -126,6 +126,22 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask) m->gfp_mask = mask; } +/** + * mapping_set_large_folios() - Indicate the file supports multi-page folios. + * @mapping: The file. + * + * The filesystem should call this function in its inode constructor to + * indicate that the VFS can use multi-page folios to cache the contents + * of the file. + * + * Context: This should not be called while the inode is active as it + * is non-atomic. 
+ */ +static inline void mapping_set_large_folios(struct address_space *mapping) +{ + __set_bit(AS_THP_SUPPORT, &mapping->flags); +} + static inline bool mapping_thp_support(struct address_space *mapping) { return test_bit(AS_THP_SUPPORT, &mapping->flags); diff --git a/mm/shmem.c b/mm/shmem.c index 23c91a8beb78..54422933fa2d 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -2303,6 +2303,7 @@ static struct inode *shmem_get_inode(struct super_block *sb, const struct inode INIT_LIST_HEAD(&info->swaplist); simple_xattrs_init(&info->xattrs); cache_no_acl(inode); + mapping_set_large_folios(inode->i_mapping); switch (mode & S_IFMT) { default: @@ -3920,7 +3921,7 @@ static struct file_system_type shmem_fs_type = { .parameters = shmem_fs_parameters, #endif .kill_sb = kill_litter_super, - .fs_flags = FS_USERNS_MOUNT | FS_THP_SUPPORT, + .fs_flags = FS_USERNS_MOUNT, }; int __init shmem_init(void) From patchwork Mon Nov 8 04:05:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12607683 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7E523C433FE for ; Mon, 8 Nov 2021 04:21:21 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5ED9961452 for ; Mon, 8 Nov 2021 04:21:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234965AbhKHEYD (ORCPT ); Sun, 7 Nov 2021 23:24:03 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45046 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234867AbhKHEYD (ORCPT ); Sun, 7 Nov 2021 23:24:03 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6CA75C061570; Sun, 7 Nov 2021 20:21:19 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=Ha9NiX4FcPxrGO/mkLsbPFeCrnWT7y32pniOTYdVCGI=; b=ekP+no7uwAxU/8R4g7L9jpkcCv floV/sn0bcv6b/T9HTrGDgDYZ/Ts9EGKbMcDzCvFojZwcXrolTcU8K2/YD8sNrffKW2a9fn4HA3F7 MNvTtfnp2A3T1lQ/BgGgbB1hrEJGiWtWcnmX6WH9nz3k9OZGKFNDguFvmbXIakulRuZhEpo0wuzQY 2QazEL4jbAsLnMds2J7xB/ocE+joLQGoxz4LqcQlH9YjLKWdqU2R/0jPoE2JYUgVK2h9spNYoO98c aTG8HCYnTNpWV7PIm7lKy4d9SrlKjb4vYd/jO128ruIXwUBnzqAwUFRMNCNzXxjT238Pxfv7FzXK9 2f3uwXHw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mjw3m-0089eb-MS; Mon, 08 Nov 2021 04:15:53 +0000 From: "Matthew Wilcox (Oracle)" To: "Darrick J . 
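
As the shmem hunk shows, each filesystem now opts in from its own
inode-initialisation path. A hypothetical conversion for some other
filesystem "myfs" (illustrative only) would follow the same shape:

static void myfs_init_mapping(struct inode *inode)
{
	/*
	 * mapping_set_large_folios() uses the non-atomic __set_bit(),
	 * so it is only safe here, before the inode becomes visible
	 * to other threads.
	 */
	mapping_set_large_folios(inode->i_mapping);
}
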
Wong " Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig Subject: [PATCH v2 04/28] fs: Rename AS_THP_SUPPORT and mapping_thp_support Date: Mon, 8 Nov 2021 04:05:27 +0000 Message-Id: <20211108040551.1942823-5-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211108040551.1942823-1-willy@infradead.org> References: <20211108040551.1942823-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org These are now indicators of multi-page folio support, not THP support. Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/pagemap.h | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 471f0c422831..2ad10e1fd224 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -34,7 +34,7 @@ enum mapping_flags { AS_EXITING = 4, /* final truncate in progress */ /* writeback related tags are not used */ AS_NO_WRITEBACK_TAGS = 5, - AS_THP_SUPPORT = 6, /* THPs supported */ + AS_LARGE_FOLIO_SUPPORT = 6, }; /** @@ -139,12 +139,12 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask) */ static inline void mapping_set_large_folios(struct address_space *mapping) { - __set_bit(AS_THP_SUPPORT, &mapping->flags); + __set_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags); } -static inline bool mapping_thp_support(struct address_space *mapping) +static inline bool mapping_large_folio_support(struct address_space *mapping) { - return test_bit(AS_THP_SUPPORT, &mapping->flags); + return test_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags); } static inline int filemap_nr_thps(struct address_space *mapping) @@ -159,7 +159,7 @@ static inline int filemap_nr_thps(struct address_space *mapping) static inline void filemap_nr_thps_inc(struct address_space *mapping) { #ifdef CONFIG_READ_ONLY_THP_FOR_FS - if (!mapping_thp_support(mapping)) + if (!mapping_large_folio_support(mapping)) atomic_inc(&mapping->nr_thps); #else WARN_ON_ONCE(1); @@ -169,7 +169,7 @@ static inline void filemap_nr_thps_inc(struct address_space *mapping) static inline void filemap_nr_thps_dec(struct address_space *mapping) { #ifdef CONFIG_READ_ONLY_THP_FOR_FS - if (!mapping_thp_support(mapping)) + if (!mapping_large_folio_support(mapping)) atomic_dec(&mapping->nr_thps); #else WARN_ON_ONCE(1); From patchwork Mon Nov 8 04:05:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12607699 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 46A79C433F5 for ; Mon, 8 Nov 2021 04:23:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2B02961265 for ; Mon, 8 Nov 2021 04:23:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237415AbhKHE0k (ORCPT ); Sun, 7 Nov 2021 23:26:40 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45622 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234867AbhKHE0k (ORCPT ); Sun, 7 Nov 2021 23:26:40 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by 
From patchwork Mon Nov 8 04:05:28 2021
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig
Subject: [PATCH v2 05/28] block: Add bio_add_folio()
Date: Mon, 8 Nov 2021 04:05:28 +0000
Message-Id: <20211108040551.1942823-6-willy@infradead.org>
In-Reply-To: <20211108040551.1942823-1-willy@infradead.org>

This is a thin wrapper around bio_add_page(). The main advantage here
is the documentation that folios larger than 2GiB are not supported.
It's not currently possible to allocate folios that large, but if it
ever becomes possible, this function will fail gracefully instead of
doing I/O to the wrong bytes.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jens Axboe
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 block/bio.c         | 22 ++++++++++++++++++++++
 include/linux/bio.h |  3 ++-
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/block/bio.c b/block/bio.c
index 15ab0d6d1c06..4b3087e20d51 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1033,6 +1033,28 @@ int bio_add_page(struct bio *bio, struct page *page,
 }
 EXPORT_SYMBOL(bio_add_page);
 
+/**
+ * bio_add_folio - Attempt to add part of a folio to a bio.
+ * @bio: BIO to add to.
+ * @folio: Folio to add.
+ * @len: How many bytes from the folio to add.
+ * @off: First byte in this folio to add.
+ *
+ * Filesystems that use folios can call this function instead of calling
+ * bio_add_page() for each page in the folio.  If @off is bigger than
+ * PAGE_SIZE, this function can create a bio_vec that starts in a page
+ * after the bv_page.  BIOs do not support folios that are 4GiB or larger.
+ *
+ * Return: Whether the addition was successful.
+ */
+bool bio_add_folio(struct bio *bio, struct folio *folio, size_t len,
+		   size_t off)
+{
+	if (len > UINT_MAX || off > UINT_MAX)
+		return false;
+	return bio_add_page(bio, &folio->page, len, off) > 0;
+}
+
 void __bio_release_pages(struct bio *bio, bool mark_dirty)
 {
 	struct bvec_iter_all iter_all;
diff --git a/include/linux/bio.h b/include/linux/bio.h
index fe6bdfbbef66..a783cac49978 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -409,7 +409,8 @@ extern void bio_uninit(struct bio *);
 extern void bio_reset(struct bio *);
 void bio_chain(struct bio *, struct bio *);
 
-extern int bio_add_page(struct bio *, struct page *, unsigned int,unsigned int);
+int bio_add_page(struct bio *, struct page *, unsigned len, unsigned off);
+bool bio_add_folio(struct bio *, struct folio *, size_t len, size_t off);
 extern int bio_add_pc_page(struct request_queue *, struct bio *, struct page *,
 			   unsigned int, unsigned int);
 int bio_add_zone_append_page(struct bio *bio, struct page *page,
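
A hypothetical caller that reads one whole folio (the names and
surrounding setup are invented for illustration; bio_alloc() here is the
two-argument form current at the time of this series):

static void myfs_read_folio_bio(struct block_device *bdev,
		struct folio *folio, sector_t sector, bio_end_io_t *end_io)
{
	struct bio *bio = bio_alloc(GFP_NOFS, 1);	/* one vector */

	bio_set_dev(bio, bdev);
	bio->bi_iter.bi_sector = sector;
	bio->bi_opf = REQ_OP_READ;
	bio->bi_end_io = end_io;
	/* Cannot fail: fresh bio, one free vector, len well under 4GiB. */
	if (!bio_add_folio(bio, folio, folio_size(folio), 0))
		WARN_ON(1);
	submit_bio(bio);
}
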
Wong " Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Christoph Hellwig Subject: [PATCH v2 06/28] block: Add bio_for_each_folio_all() Date: Mon, 8 Nov 2021 04:05:29 +0000 Message-Id: <20211108040551.1942823-7-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211108040551.1942823-1-willy@infradead.org> References: <20211108040551.1942823-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Allow callers to iterate over each folio instead of each page. The bio need not have been constructed using folios originally. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Jens Axboe Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. Wong --- Documentation/core-api/kernel-api.rst | 1 + include/linux/bio.h | 53 ++++++++++++++++++++++++++- 2 files changed, 53 insertions(+), 1 deletion(-) diff --git a/Documentation/core-api/kernel-api.rst b/Documentation/core-api/kernel-api.rst index 2e7186805148..7f0cb604b6ab 100644 --- a/Documentation/core-api/kernel-api.rst +++ b/Documentation/core-api/kernel-api.rst @@ -279,6 +279,7 @@ Accounting Framework Block Devices ============= +.. kernel-doc:: include/linux/bio.h .. kernel-doc:: block/blk-core.c :export: diff --git a/include/linux/bio.h b/include/linux/bio.h index a783cac49978..e3c9e8207f12 100644 --- a/include/linux/bio.h +++ b/include/linux/bio.h @@ -166,7 +166,7 @@ static inline void bio_advance(struct bio *bio, unsigned int nbytes) */ #define bio_for_each_bvec_all(bvl, bio, i) \ for (i = 0, bvl = bio_first_bvec_all(bio); \ - i < (bio)->bi_vcnt; i++, bvl++) \ + i < (bio)->bi_vcnt; i++, bvl++) #define bio_iter_last(bvec, iter) ((iter).bi_size == (bvec).bv_len) @@ -260,6 +260,57 @@ static inline struct bio_vec *bio_last_bvec_all(struct bio *bio) return &bio->bi_io_vec[bio->bi_vcnt - 1]; } +/** + * struct folio_iter - State for iterating all folios in a bio. + * @folio: The current folio we're iterating. NULL after the last folio. + * @offset: The byte offset within the current folio. + * @length: The number of bytes in this iteration (will not cross folio + * boundary). + */ +struct folio_iter { + struct folio *folio; + size_t offset; + size_t length; + /* private: for use by the iterator */ + size_t _seg_count; + int _i; +}; + +static inline void bio_first_folio(struct folio_iter *fi, struct bio *bio, + int i) +{ + struct bio_vec *bvec = bio_first_bvec_all(bio) + i; + + fi->folio = page_folio(bvec->bv_page); + fi->offset = bvec->bv_offset + + PAGE_SIZE * (bvec->bv_page - &fi->folio->page); + fi->_seg_count = bvec->bv_len; + fi->length = min(folio_size(fi->folio) - fi->offset, fi->_seg_count); + fi->_i = i; +} + +static inline void bio_next_folio(struct folio_iter *fi, struct bio *bio) +{ + fi->_seg_count -= fi->length; + if (fi->_seg_count) { + fi->folio = folio_next(fi->folio); + fi->offset = 0; + fi->length = min(folio_size(fi->folio), fi->_seg_count); + } else if (fi->_i + 1 < bio->bi_vcnt) { + bio_first_folio(fi, bio, fi->_i + 1); + } else { + fi->folio = NULL; + } +} + +/** + * bio_for_each_folio_all - Iterate over each folio in a bio. + * @fi: struct folio_iter which is updated for each folio. + * @bio: struct bio to iterate over. 
+ */ +#define bio_for_each_folio_all(fi, bio) \ + for (bio_first_folio(&fi, bio, 0); fi.folio; bio_next_folio(&fi, bio)) + enum bip_flags { BIP_BLOCK_INTEGRITY = 1 << 0, /* block layer owns integrity data */ BIP_MAPPED_INTEGRITY = 1 << 1, /* ref tag has been remapped */ From patchwork Mon Nov 8 04:05:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12607713 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CC07DC4332F for ; Mon, 8 Nov 2021 04:28:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id ACBBB6125F for ; Mon, 8 Nov 2021 04:28:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237447AbhKHEbe (ORCPT ); Sun, 7 Nov 2021 23:31:34 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46696 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231463AbhKHEbd (ORCPT ); Sun, 7 Nov 2021 23:31:33 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9B78EC061570; Sun, 7 Nov 2021 20:28:49 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=FFUWVznVjkFny3CeBnySJ30Sr87Ms+/L5ZarsHXVCDs=; b=ZnAHgapdFTx7RwBoh6tHHaw3+z zXPtT1PxxtQ/XivY5COItC0EccD3TFQuOguWfniKFWdN+QfUzAp70yHZPaUkLLuAhqbP5h5z7MHCw j9F5mSElVGmEXlAjAIDsH0QKp5QnS9cehGc2sEuPGa5jGaH9H9oKaXRvF2uRUZSUewEEJK9D46KCK F9GOOBmCbyLl+KI6hRUG6DjlGmJi5ws/pV9tzS+Dw98nJrSMWfeW7TAmV29TBy/+0gasaOHDLTq20 SEmKkilsZIL0AGfUM+uzuX7grQiNg96PeaYnINFsxsiQd93uX1cLSRsiivcwE+Mz2eAlz6fHtBzra oF01EB/Q==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mjwC3-0089rj-TH; Mon, 08 Nov 2021 04:24:25 +0000 From: "Matthew Wilcox (Oracle)" To: "Darrick J . Wong " Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig Subject: [PATCH v2 07/28] fs/buffer: Convert __block_write_begin_int() to take a folio Date: Mon, 8 Nov 2021 04:05:30 +0000 Message-Id: <20211108040551.1942823-8-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211108040551.1942823-1-willy@infradead.org> References: <20211108040551.1942823-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org There are no plans to convert buffer_head infrastructure to use multi-page folios, but __block_write_begin_int() is called from iomap, and it's more convenient and less error-prone if we pass in a folio from iomap. It also has a nice saving of almost 200 bytes of code from removing repeated calls to compound_head(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. 
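
A hypothetical read-completion handler using the new iterator (names
invented; this assumes each folio was added exactly once with
bio_add_folio(), so every iteration covers a whole folio):

static void myfs_read_end_io(struct bio *bio)
{
	struct folio_iter fi;

	bio_for_each_folio_all(fi, bio) {
		if (!bio->bi_status)
			folio_mark_uptodate(fi.folio);
		folio_unlock(fi.folio);
	}
	bio_put(bio);
}
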
From patchwork Mon Nov 8 04:05:30 2021
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig
Subject: [PATCH v2 07/28] fs/buffer: Convert __block_write_begin_int() to take a folio
Date: Mon, 8 Nov 2021 04:05:30 +0000
Message-Id: <20211108040551.1942823-8-willy@infradead.org>
In-Reply-To: <20211108040551.1942823-1-willy@infradead.org>

There are no plans to convert buffer_head infrastructure to use
multi-page folios, but __block_write_begin_int() is called from iomap,
and it's more convenient and less error-prone if we pass in a folio
from iomap. It also saves almost 200 bytes of code by removing
repeated calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/buffer.c            | 22 +++++++++++-----------
 fs/internal.h          |  2 +-
 fs/iomap/buffered-io.c |  7 +++++--
 3 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 46bc589b7a03..b1d722b26fe9 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1969,34 +1969,34 @@ iomap_to_bh(struct inode *inode, sector_t block, struct buffer_head *bh,
 	}
 }
 
-int __block_write_begin_int(struct page *page, loff_t pos, unsigned len,
+int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
 		get_block_t *get_block, const struct iomap *iomap)
 {
 	unsigned from = pos & (PAGE_SIZE - 1);
 	unsigned to = from + len;
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = folio->mapping->host;
 	unsigned block_start, block_end;
 	sector_t block;
 	int err = 0;
 	unsigned blocksize, bbits;
 	struct buffer_head *bh, *head, *wait[2], **wait_bh=wait;
 
-	BUG_ON(!PageLocked(page));
+	BUG_ON(!folio_test_locked(folio));
 	BUG_ON(from > PAGE_SIZE);
 	BUG_ON(to > PAGE_SIZE);
 	BUG_ON(from > to);
 
-	head = create_page_buffers(page, inode, 0);
+	head = create_page_buffers(&folio->page, inode, 0);
 	blocksize = head->b_size;
 	bbits = block_size_bits(blocksize);
 
-	block = (sector_t)page->index << (PAGE_SHIFT - bbits);
+	block = (sector_t)folio->index << (PAGE_SHIFT - bbits);
 
 	for(bh = head, block_start = 0; bh != head || !block_start;
 	    block++, block_start=block_end, bh = bh->b_this_page) {
 		block_end = block_start + blocksize;
 		if (block_end <= from || block_start >= to) {
-			if (PageUptodate(page)) {
+			if (folio_test_uptodate(folio)) {
 				if (!buffer_uptodate(bh))
 					set_buffer_uptodate(bh);
 			}
@@ -2016,20 +2016,20 @@ int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
 			if (buffer_new(bh)) {
 				clean_bdev_bh_alias(bh);
-				if (PageUptodate(page)) {
+				if (folio_test_uptodate(folio)) {
 					clear_buffer_new(bh);
 					set_buffer_uptodate(bh);
 					mark_buffer_dirty(bh);
 					continue;
 				}
 				if (block_end > to || block_start < from)
-					zero_user_segments(page,
+					folio_zero_segments(folio,
 						to, block_end,
 						block_start, from);
 				continue;
 			}
 		}
-		if (PageUptodate(page)) {
+		if (folio_test_uptodate(folio)) {
 			if (!buffer_uptodate(bh))
 				set_buffer_uptodate(bh);
 			continue;
@@ -2050,14 +2050,14 @@ int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
 			err = -EIO;
 	}
 	if (unlikely(err))
-		page_zero_new_buffers(page, from, to);
+		page_zero_new_buffers(&folio->page, from, to);
 	return err;
 }
 
 int __block_write_begin(struct page *page, loff_t pos, unsigned len,
 		get_block_t *get_block)
 {
-	return __block_write_begin_int(page, pos, len, get_block, NULL);
+	return __block_write_begin_int(page_folio(page), pos, len, get_block, NULL);
 }
 EXPORT_SYMBOL(__block_write_begin);
diff --git a/fs/internal.h b/fs/internal.h
index cdd83d4899bb..afc13443392b 100644
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -37,7 +37,7 @@ static inline int emergency_thaw_bdev(struct super_block *sb)
 /*
  * buffer.c
  */
-int __block_write_begin_int(struct page *page, loff_t pos, unsigned len,
+int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
 		get_block_t *get_block, const struct iomap *iomap);
 
 /*
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 1753c26c8e76..4e09ea823148 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -597,6 +597,7 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 	const struct iomap_page_ops *page_ops = iter->iomap.page_ops;
 	const struct iomap *srcmap = iomap_iter_srcmap(iter);
 	struct page *page;
+	struct folio *folio;
 	int status = 0;
 
 	BUG_ON(pos + len > iter->iomap.offset + iter->iomap.length);
@@ -618,11 +619,12 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 		status = -ENOMEM;
 		goto out_no_page;
 	}
+	folio = page_folio(page);
 
 	if (srcmap->type == IOMAP_INLINE)
 		status = iomap_write_begin_inline(iter, page);
 	else if (srcmap->flags & IOMAP_F_BUFFER_HEAD)
-		status = __block_write_begin_int(page, pos, len, NULL, srcmap);
+		status = __block_write_begin_int(folio, pos, len, NULL, srcmap);
 	else
 		status = __iomap_write_begin(iter, pos, len, page);
 
@@ -954,11 +956,12 @@ EXPORT_SYMBOL_GPL(iomap_truncate_page);
 static loff_t iomap_page_mkwrite_iter(struct iomap_iter *iter,
 		struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	loff_t length = iomap_length(iter);
 	int ret;
 
 	if (iter->iomap.flags & IOMAP_F_BUFFER_HEAD) {
-		ret = __block_write_begin_int(page, iter->pos, length, NULL,
+		ret = __block_write_begin_int(folio, iter->pos, length, NULL,
 					      &iter->iomap);
 		if (ret)
 			return ret;
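
The code-size saving is mechanical: every page-flag test on a struct
page must first normalise a possible tail page to its head, whereas a
folio is never a tail page. Roughly (deliberately simplified sketches,
not the real definitions, which also contain memory barriers):

/* Page-based: compound_head() hides inside every PageUptodate() call. */
static inline bool my_page_uptodate(struct page *page)
{
	return test_bit(PG_uptodate, &compound_head(page)->flags);
}

/* Folio-based: no normalisation needed, the flags word is right there. */
static inline bool my_folio_uptodate(struct folio *folio)
{
	return test_bit(PG_uptodate, &folio->page.flags);
}
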
Wong " Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Christoph Hellwig Subject: [PATCH v2 08/28] iomap: Convert to_iomap_page to take a folio Date: Mon, 8 Nov 2021 04:05:31 +0000 Message-Id: <20211108040551.1942823-9-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211108040551.1942823-1-willy@infradead.org> References: <20211108040551.1942823-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org The big comment about only using a head page can go away now that it takes a folio argument. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Darrick J. Wong Reviewed-by: Christoph Hellwig --- fs/iomap/buffered-io.c | 32 +++++++++++++++----------------- 1 file changed, 15 insertions(+), 17 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 4e09ea823148..236beeeaef42 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -22,8 +22,8 @@ #include "../internal.h" /* - * Structure allocated for each page or THP when block size < page size - * to track sub-page uptodate status and I/O completions. + * Structure allocated for each folio when block size < folio size + * to track sub-folio uptodate status and I/O completions. */ struct iomap_page { atomic_t read_bytes_pending; @@ -32,17 +32,10 @@ struct iomap_page { unsigned long uptodate[]; }; -static inline struct iomap_page *to_iomap_page(struct page *page) +static inline struct iomap_page *to_iomap_page(struct folio *folio) { - /* - * per-block data is stored in the head page. Callers should - * not be dealing with tail pages, and if they are, they can - * call thp_head() first. 
- */ - VM_BUG_ON_PGFLAGS(PageTail(page), page); - - if (page_has_private(page)) - return (struct iomap_page *)page_private(page); + if (folio_test_private(folio)) + return folio_get_private(folio); return NULL; } @@ -51,7 +44,8 @@ static struct bio_set iomap_ioend_bioset; static struct iomap_page * iomap_page_create(struct inode *inode, struct page *page) { - struct iomap_page *iop = to_iomap_page(page); + struct folio *folio = page_folio(page); + struct iomap_page *iop = to_iomap_page(folio); unsigned int nr_blocks = i_blocks_per_page(inode, page); if (iop || nr_blocks <= 1) @@ -144,7 +138,8 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop, static void iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len) { - struct iomap_page *iop = to_iomap_page(page); + struct folio *folio = page_folio(page); + struct iomap_page *iop = to_iomap_page(folio); struct inode *inode = page->mapping->host; unsigned first = off >> inode->i_blkbits; unsigned last = (off + len - 1) >> inode->i_blkbits; @@ -173,7 +168,8 @@ static void iomap_read_page_end_io(struct bio_vec *bvec, int error) { struct page *page = bvec->bv_page; - struct iomap_page *iop = to_iomap_page(page); + struct folio *folio = page_folio(page); + struct iomap_page *iop = to_iomap_page(folio); if (unlikely(error)) { ClearPageUptodate(page); @@ -427,7 +423,8 @@ int iomap_is_partially_uptodate(struct page *page, unsigned long from, unsigned long count) { - struct iomap_page *iop = to_iomap_page(page); + struct folio *folio = page_folio(page); + struct iomap_page *iop = to_iomap_page(folio); struct inode *inode = page->mapping->host; unsigned len, first, last; unsigned i; @@ -1006,7 +1003,8 @@ static void iomap_finish_page_writeback(struct inode *inode, struct page *page, int error, unsigned int len) { - struct iomap_page *iop = to_iomap_page(page); + struct folio *folio = page_folio(page); + struct iomap_page *iop = to_iomap_page(folio); if (error) { SetPageError(page); From patchwork Mon Nov 8 04:05:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12607717 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 40904C433FE for ; Mon, 8 Nov 2021 04:32:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1C15361381 for ; Mon, 8 Nov 2021 04:32:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234472AbhKHEfI (ORCPT ); Sun, 7 Nov 2021 23:35:08 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47486 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231463AbhKHEfH (ORCPT ); Sun, 7 Nov 2021 23:35:07 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3C5FEC061570; Sun, 7 Nov 2021 20:32:24 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=RSWa6K0pUAnHxgFaPZenOcWe2pyN0Haz/Bc0OfYIlB0=; b=uCfwPUlzhfTU/iXLoZUGvNx9D+ 
From patchwork Mon Nov 8 04:05:32 2021
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig
Subject: [PATCH v2 09/28] iomap: Convert iomap_page_create to take a folio
Date: Mon, 8 Nov 2021 04:05:32 +0000
Message-Id: <20211108040551.1942823-10-willy@infradead.org>
In-Reply-To: <20211108040551.1942823-1-willy@infradead.org>

This function already assumed it was being passed a head page, so just
formalise that.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 236beeeaef42..6972ac8fda77 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -42,11 +42,10 @@ static inline struct iomap_page *to_iomap_page(struct folio *folio)
 static struct bio_set iomap_ioend_bioset;
 
 static struct iomap_page *
-iomap_page_create(struct inode *inode, struct page *page)
+iomap_page_create(struct inode *inode, struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
 	struct iomap_page *iop = to_iomap_page(folio);
-	unsigned int nr_blocks = i_blocks_per_page(inode, page);
+	unsigned int nr_blocks = i_blocks_per_folio(inode, folio);
 
 	if (iop || nr_blocks <= 1)
 		return iop;
@@ -54,9 +53,9 @@ iomap_page_create(struct inode *inode, struct folio *folio)
 	iop = kzalloc(struct_size(iop, uptodate, BITS_TO_LONGS(nr_blocks)),
 			GFP_NOFS | __GFP_NOFAIL);
 	spin_lock_init(&iop->uptodate_lock);
-	if (PageUptodate(page))
+	if (folio_test_uptodate(folio))
 		bitmap_fill(iop->uptodate, nr_blocks);
-	attach_page_private(page, iop);
+	folio_attach_private(folio, iop);
 	return iop;
 }
 
@@ -204,6 +203,7 @@ struct iomap_readpage_ctx {
 static loff_t iomap_read_inline_data(const struct iomap_iter *iter,
 		struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	const struct iomap *iomap = iomap_iter_srcmap(iter);
 	size_t size = i_size_read(iter->inode) - iomap->offset;
 	size_t poff = offset_in_page(iomap->offset);
@@ -220,7 +220,7 @@ static loff_t iomap_read_inline_data(const struct iomap_iter *iter,
 	if (WARN_ON_ONCE(size > iomap->length))
 		return -EIO;
 	if (poff > 0)
-		iomap_page_create(iter->inode, page);
+		iomap_page_create(iter->inode, folio);
 
 	addr = kmap_local_page(page) + poff;
 	memcpy(addr, iomap->inline_data, size);
@@ -247,6 +247,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
 	loff_t pos = iter->pos + offset;
 	loff_t length = iomap_length(iter) - offset;
 	struct page *page = ctx->cur_page;
+	struct folio *folio = page_folio(page);
 	struct iomap_page *iop;
 	loff_t orig_pos = pos;
 	unsigned poff, plen;
@@ -256,7 +257,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
 		return min(iomap_read_inline_data(iter, page), length);
 
 	/* zero post-eof blocks as the page may be mapped */
-	iop = iomap_page_create(iter->inode, page);
+	iop = iomap_page_create(iter->inode, folio);
 	iomap_adjust_read_range(iter->inode, iop, &pos, length, &poff, &plen);
 	if (plen == 0)
 		goto done;
@@ -536,8 +537,9 @@ iomap_read_page_sync(loff_t block_start, struct page *page, unsigned poff,
 static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 		unsigned len, struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	const struct iomap *srcmap = iomap_iter_srcmap(iter);
-	struct iomap_page *iop = iomap_page_create(iter->inode, page);
+	struct iomap_page *iop = iomap_page_create(iter->inode, folio);
 	loff_t block_size = i_blocksize(iter->inode);
 	loff_t block_start = round_down(pos, block_size);
 	loff_t block_end = round_up(pos + len, block_size);
@@ -1290,7 +1292,8 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct inode *inode,
 		struct page *page, u64 end_offset)
 {
-	struct iomap_page *iop = iomap_page_create(inode, page);
+	struct folio *folio = page_folio(page);
+	struct iomap_page *iop = iomap_page_create(inode, folio);
 	struct iomap_ioend *ioend, *next;
 	unsigned len = i_blocksize(inode);
 	u64 file_offset; /* file offset of page */
Wong " Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Christoph Hellwig Subject: [PATCH v2 10/28] iomap: Convert iomap_page_release to take a folio Date: Mon, 8 Nov 2021 04:05:33 +0000 Message-Id: <20211108040551.1942823-11-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211108040551.1942823-1-willy@infradead.org> References: <20211108040551.1942823-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org iomap_page_release() was also assuming that it was being passed a head page. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Darrick J. Wong Reviewed-by: Christoph Hellwig --- fs/iomap/buffered-io.c | 18 +++++++++++------- 1 file changed, 11 insertions(+), 7 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 6972ac8fda77..ad3a16861ddc 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -59,18 +59,18 @@ iomap_page_create(struct inode *inode, struct folio *folio) return iop; } -static void -iomap_page_release(struct page *page) +static void iomap_page_release(struct folio *folio) { - struct iomap_page *iop = detach_page_private(page); - unsigned int nr_blocks = i_blocks_per_page(page->mapping->host, page); + struct iomap_page *iop = folio_detach_private(folio); + struct inode *inode = folio->mapping->host; + unsigned int nr_blocks = i_blocks_per_folio(inode, folio); if (!iop) return; WARN_ON_ONCE(atomic_read(&iop->read_bytes_pending)); WARN_ON_ONCE(atomic_read(&iop->write_bytes_pending)); WARN_ON_ONCE(bitmap_full(iop->uptodate, nr_blocks) != - PageUptodate(page)); + folio_test_uptodate(folio)); kfree(iop); } @@ -451,6 +451,8 @@ EXPORT_SYMBOL_GPL(iomap_is_partially_uptodate); int iomap_releasepage(struct page *page, gfp_t gfp_mask) { + struct folio *folio = page_folio(page); + trace_iomap_releasepage(page->mapping->host, page_offset(page), PAGE_SIZE); @@ -461,7 +463,7 @@ iomap_releasepage(struct page *page, gfp_t gfp_mask) */ if (PageDirty(page) || PageWriteback(page)) return 0; - iomap_page_release(page); + iomap_page_release(folio); return 1; } EXPORT_SYMBOL_GPL(iomap_releasepage); @@ -469,6 +471,8 @@ EXPORT_SYMBOL_GPL(iomap_releasepage); void iomap_invalidatepage(struct page *page, unsigned int offset, unsigned int len) { + struct folio *folio = page_folio(page); + trace_iomap_invalidatepage(page->mapping->host, offset, len); /* @@ -478,7 +482,7 @@ iomap_invalidatepage(struct page *page, unsigned int offset, unsigned int len) if (offset == 0 && len == PAGE_SIZE) { WARN_ON_ONCE(PageWriteback(page)); cancel_dirty_page(page); - iomap_page_release(page); + iomap_page_release(folio); } } EXPORT_SYMBOL_GPL(iomap_invalidatepage); From patchwork Mon Nov 8 04:05:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12607735 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3C8B7C433FE for ; Mon, 8 Nov 2021 04:37:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1E8C4610A5 for ; Mon, 8 Nov 2021 04:37:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id 
From patchwork Mon Nov 8 04:05:34 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12607735
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig
Subject: [PATCH v2 11/28] iomap: Convert iomap_releasepage to use a folio
Date: Mon, 8 Nov 2021 04:05:34 +0000
Message-Id: <20211108040551.1942823-12-willy@infradead.org>
In-Reply-To: <20211108040551.1942823-1-willy@infradead.org>
References: <20211108040551.1942823-1-willy@infradead.org>
X-Mailing-List: linux-block@vger.kernel.org

This is an address_space operation, so its argument must remain as a
struct page, but we can use a folio internally.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index ad3a16861ddc..49f96fdadcb4 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -453,15 +453,15 @@ iomap_releasepage(struct page *page, gfp_t gfp_mask)
 {
 	struct folio *folio = page_folio(page);

-	trace_iomap_releasepage(page->mapping->host, page_offset(page),
-			PAGE_SIZE);
+	trace_iomap_releasepage(folio->mapping->host, folio_pos(folio),
+			folio_size(folio));

 	/*
 	 * mm accommodates an old ext3 case where clean pages might not have had
 	 * the dirty bit cleared. Thus, it can send actual dirty pages to
 	 * ->releasepage() via shrink_active_list(); skip those here.
 	 */
-	if (PageDirty(page) || PageWriteback(page))
+	if (folio_test_dirty(folio) || folio_test_writeback(folio))
 		return 0;
 	iomap_page_release(folio);
 	return 1;
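This patch establishes the shape that recurs through the rest of the
series: the address_space operation keeps its struct page prototype,
converts to a folio immediately with page_folio(), and everything below
that point works on the folio. A schematic of the pattern, with
hypothetical names:

	int example_releasepage(struct page *page, gfp_t gfp)
	{
		/* page_folio() maps any page, head or tail, to its folio */
		struct folio *folio = page_folio(page);

		if (folio_test_dirty(folio) || folio_test_writeback(folio))
			return 0;
		/* ... release per-folio private state here ... */
		return 1;
	}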
From patchwork Mon Nov 8 04:05:35 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12607745
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig
Subject: [PATCH v2 12/28] iomap: Add iomap_invalidate_folio
Date: Mon, 8 Nov 2021 04:05:35 +0000
Message-Id: <20211108040551.1942823-13-willy@infradead.org>
In-Reply-To: <20211108040551.1942823-1-willy@infradead.org>
References: <20211108040551.1942823-1-willy@infradead.org>
X-Mailing-List: linux-block@vger.kernel.org

Keep iomap_invalidatepage around as a wrapper for use in address_space
operations.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 20 ++++++++++++--------
 include/linux/iomap.h  |  1 +
 2 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 49f96fdadcb4..b7cbe4d202d8 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -468,23 +468,27 @@ iomap_releasepage(struct page *page, gfp_t gfp_mask)
 }
 EXPORT_SYMBOL_GPL(iomap_releasepage);

-void
-iomap_invalidatepage(struct page *page, unsigned int offset, unsigned int len)
+void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len)
 {
-	struct folio *folio = page_folio(page);
-
-	trace_iomap_invalidatepage(page->mapping->host, offset, len);
+	trace_iomap_invalidatepage(folio->mapping->host, offset, len);

 	/*
 	 * If we're invalidating the entire page, clear the dirty state from it
 	 * and release it to avoid unnecessary buildup of the LRU.
 	 */
-	if (offset == 0 && len == PAGE_SIZE) {
-		WARN_ON_ONCE(PageWriteback(page));
-		cancel_dirty_page(page);
+	if (offset == 0 && len == folio_size(folio)) {
+		WARN_ON_ONCE(folio_test_writeback(folio));
+		folio_cancel_dirty(folio);
 		iomap_page_release(folio);
 	}
 }
+EXPORT_SYMBOL_GPL(iomap_invalidate_folio);
+
+void iomap_invalidatepage(struct page *page, unsigned int offset,
+		unsigned int len)
+{
+	iomap_invalidate_folio(page_folio(page), offset, len);
+}
 EXPORT_SYMBOL_GPL(iomap_invalidatepage);

 #ifdef CONFIG_MIGRATION
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 6d1b08d0ae93..29491fb9c5ba 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -225,6 +225,7 @@ void iomap_readahead(struct readahead_control *, const struct iomap_ops *ops);
 int iomap_is_partially_uptodate(struct page *page, unsigned long from,
 		unsigned long count);
 int iomap_releasepage(struct page *page, gfp_t gfp_mask);
+void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len);
 void iomap_invalidatepage(struct page *page, unsigned int offset,
 		unsigned int len);
 #ifdef CONFIG_MIGRATION
From patchwork Mon Nov 8 04:05:36 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12607747
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig
Subject: [PATCH v2 13/28] iomap: Pass the iomap_page into iomap_set_range_uptodate
Date: Mon, 8 Nov 2021 04:05:36 +0000
Message-Id: <20211108040551.1942823-14-willy@infradead.org>
In-Reply-To: <20211108040551.1942823-1-willy@infradead.org>
References: <20211108040551.1942823-1-willy@infradead.org>
X-Mailing-List: linux-block@vger.kernel.org

All but one caller already has the iomap_page, so we can avoid getting
it again.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index b7cbe4d202d8..03bfbafec3f4 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -134,11 +134,9 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop,
 	*lenp = plen;
 }

-static void
-iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len)
+static void iomap_iop_set_range_uptodate(struct page *page,
+		struct iomap_page *iop, unsigned off, unsigned len)
 {
-	struct folio *folio = page_folio(page);
-	struct iomap_page *iop = to_iomap_page(folio);
 	struct inode *inode = page->mapping->host;
 	unsigned first = off >> inode->i_blkbits;
 	unsigned last = (off + len - 1) >> inode->i_blkbits;
@@ -151,14 +149,14 @@ iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len)
 	spin_unlock_irqrestore(&iop->uptodate_lock, flags);
 }

-static void
-iomap_set_range_uptodate(struct page *page, unsigned off, unsigned len)
+static void iomap_set_range_uptodate(struct page *page,
+		struct iomap_page *iop, unsigned off, unsigned len)
 {
 	if (PageError(page))
 		return;

-	if (page_has_private(page))
-		iomap_iop_set_range_uptodate(page, off, len);
+	if (iop)
+		iomap_iop_set_range_uptodate(page, iop, off, len);
 	else
 		SetPageUptodate(page);
 }
@@ -174,7 +172,8 @@ iomap_read_page_end_io(struct bio_vec *bvec, int error)
 		ClearPageUptodate(page);
 		SetPageError(page);
 	} else {
-		iomap_set_range_uptodate(page, bvec->bv_offset, bvec->bv_len);
+		iomap_set_range_uptodate(page, iop, bvec->bv_offset,
+						bvec->bv_len);
 	}

 	if (!iop || atomic_sub_and_test(bvec->bv_len, &iop->read_bytes_pending))
@@ -204,6 +203,7 @@ static loff_t iomap_read_inline_data(const struct iomap_iter *iter,
 		struct page *page)
 {
 	struct folio *folio = page_folio(page);
+	struct iomap_page *iop;
 	const struct iomap *iomap = iomap_iter_srcmap(iter);
 	size_t size = i_size_read(iter->inode) - iomap->offset;
 	size_t poff = offset_in_page(iomap->offset);
@@ -220,13 +220,15 @@ static loff_t iomap_read_inline_data(const struct iomap_iter *iter,
 	if (WARN_ON_ONCE(size > iomap->length))
 		return -EIO;
 	if (poff > 0)
-		iomap_page_create(iter->inode, folio);
+		iop = iomap_page_create(iter->inode, folio);
+	else
+		iop = to_iomap_page(folio);

 	addr = kmap_local_page(page) + poff;
 	memcpy(addr, iomap->inline_data, size);
 	memset(addr + size, 0, PAGE_SIZE - poff - size);
 	kunmap_local(addr);
-	iomap_set_range_uptodate(page, poff, PAGE_SIZE - poff);
+	iomap_set_range_uptodate(page, iop, poff, PAGE_SIZE - poff);
 	return PAGE_SIZE - poff;
 }

@@ -264,7 +266,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,

 	if (iomap_block_needs_zeroing(iter, pos)) {
 		zero_user(page, poff, plen);
-		iomap_set_range_uptodate(page, poff, plen);
+		iomap_set_range_uptodate(page, iop, poff, plen);
 		goto done;
 	}

@@ -578,7 +580,7 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 			if (status)
 				return status;
 		}
-		iomap_set_range_uptodate(page, poff, plen);
+		iomap_set_range_uptodate(page, iop, poff, plen);
 	} while ((block_start += plen) < block_end);

 	return 0;
@@ -655,6 +657,8 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 		size_t copied, struct page *page)
 {
+	struct folio *folio = page_folio(page);
+	struct iomap_page *iop = to_iomap_page(folio);
 	flush_dcache_page(page);

 	/*
@@ -670,7 +674,7 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 	 */
 	if (unlikely(copied < len && !PageUptodate(page)))
 		return 0;
-	iomap_set_range_uptodate(page, offset_in_page(pos), len);
+	iomap_set_range_uptodate(page, iop, offset_in_page(pos), len);
 	__set_page_dirty_nobuffers(page);
 	return copied;
 }
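The one caller that does not already hold the iomap_page fetches it with
to_iomap_page(); threading iop down the call chain avoids redoing that
lookup for every sub-page range. For reference, the lookup amounts to
reading the folio's private pointer, roughly like this sketch (the real
helper lives at the top of fs/iomap/buffered-io.c):

	static inline struct iomap_page *to_iomap_page_sketch(struct folio *folio)
	{
		/* the iomap_page, if any, is attached as folio private data */
		if (folio_test_private(folio))
			return folio_get_private(folio);
		return NULL;
	}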
From patchwork Mon Nov 8 04:05:37 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12607759
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig
Subject: [PATCH v2 14/28] iomap: Convert bio completions to use folios
Date: Mon, 8 Nov 2021 04:05:37 +0000
Message-Id: <20211108040551.1942823-15-willy@infradead.org>
In-Reply-To: <20211108040551.1942823-1-willy@infradead.org>
References: <20211108040551.1942823-1-willy@infradead.org>
X-Mailing-List: linux-block@vger.kernel.org

Use bio_for_each_folio() to iterate over each folio in the bio
instead of iterating over each page.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 50 ++++++++++++++++++------------------
 1 file changed, 21 insertions(+), 29 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 03bfbafec3f4..bbccb031815e 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -161,34 +161,29 @@ static void iomap_set_range_uptodate(struct page *page,
 		SetPageUptodate(page);
 }

-static void
-iomap_read_page_end_io(struct bio_vec *bvec, int error)
+static void iomap_finish_folio_read(struct folio *folio, size_t offset,
+		size_t len, int error)
 {
-	struct page *page = bvec->bv_page;
-	struct folio *folio = page_folio(page);
 	struct iomap_page *iop = to_iomap_page(folio);

 	if (unlikely(error)) {
-		ClearPageUptodate(page);
-		SetPageError(page);
+		folio_clear_uptodate(folio);
+		folio_set_error(folio);
 	} else {
-		iomap_set_range_uptodate(page, iop, bvec->bv_offset,
-						bvec->bv_len);
+		iomap_set_range_uptodate(&folio->page, iop, offset, len);
 	}

-	if (!iop || atomic_sub_and_test(bvec->bv_len, &iop->read_bytes_pending))
-		unlock_page(page);
+	if (!iop || atomic_sub_and_test(len, &iop->read_bytes_pending))
+		folio_unlock(folio);
 }

-static void
-iomap_read_end_io(struct bio *bio)
+static void iomap_read_end_io(struct bio *bio)
 {
 	int error = blk_status_to_errno(bio->bi_status);
-	struct bio_vec *bvec;
-	struct bvec_iter_all iter_all;
+	struct folio_iter fi;

-	bio_for_each_segment_all(bvec, bio, iter_all)
-		iomap_read_page_end_io(bvec, error);
+	bio_for_each_folio_all(fi, bio)
+		iomap_finish_folio_read(fi.folio, fi.offset, fi.length, error);
 	bio_put(bio);
 }
@@ -1013,23 +1008,21 @@ EXPORT_SYMBOL_GPL(iomap_page_mkwrite);

-static void
-iomap_finish_page_writeback(struct inode *inode, struct page *page,
-		int error, unsigned int len)
+static void iomap_finish_folio_write(struct inode *inode, struct folio *folio,
+		size_t len, int error)
 {
-	struct folio *folio = page_folio(page);
 	struct iomap_page *iop = to_iomap_page(folio);

 	if (error) {
-		SetPageError(page);
+		folio_set_error(folio);
 		mapping_set_error(inode->i_mapping, error);
 	}

-	WARN_ON_ONCE(i_blocks_per_page(inode, page) > 1 && !iop);
+	WARN_ON_ONCE(i_blocks_per_folio(inode, folio) > 1 && !iop);
 	WARN_ON_ONCE(iop && atomic_read(&iop->write_bytes_pending) <= 0);

 	if (!iop || atomic_sub_and_test(len, &iop->write_bytes_pending))
-		end_page_writeback(page);
+		folio_end_writeback(folio);
 }

 /*
@@ -1048,8 +1041,7 @@ iomap_finish_ioend(struct iomap_ioend *ioend, int error)
 	bool quiet = bio_flagged(bio, BIO_QUIET);

 	for (bio = &ioend->io_inline_bio; bio; bio = next) {
-		struct bio_vec *bv;
-		struct bvec_iter_all iter_all;
+		struct folio_iter fi;

 		/*
 		 * For the last bio, bi_private points to the ioend, so we
@@ -1060,10 +1052,10 @@ iomap_finish_ioend(struct iomap_ioend *ioend, int error)
 		else
 			next = bio->bi_private;

-		/* walk each page on bio, ending page IO on them */
-		bio_for_each_segment_all(bv, bio, iter_all)
-			iomap_finish_page_writeback(inode, bv->bv_page, error,
-					bv->bv_len);
+		/* walk all folios in bio, ending page IO on them */
+		bio_for_each_folio_all(fi, bio)
+			iomap_finish_folio_write(inode, fi.folio, fi.length,
+					error);
 		bio_put(bio);
 	}
 	/* The ioend has been freed by bio_put() */
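bio_for_each_folio_all() hands the completion handler a struct folio_iter
whose .folio, .offset and .length members describe one folio-sized chunk
of the bio, so a multi-page folio is processed once rather than once per
page. A minimal sketch of a completion handler built on it
(example_finish_read is a hypothetical per-folio helper, not an iomap
function):

	static void example_read_end_io(struct bio *bio)
	{
		int error = blk_status_to_errno(bio->bi_status);
		struct folio_iter fi;

		/* one iteration per folio, however many pages each spans */
		bio_for_each_folio_all(fi, bio)
			example_finish_read(fi.folio, fi.offset, fi.length, error);
		bio_put(bio);
	}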
From patchwork Mon Nov 8 04:05:38 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12607761
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig
Subject: [PATCH v2 15/28] iomap: Use folio offsets instead of page offsets
Date: Mon, 8 Nov 2021 04:05:38 +0000
Message-Id: <20211108040551.1942823-16-willy@infradead.org>
In-Reply-To: <20211108040551.1942823-1-willy@infradead.org>
References: <20211108040551.1942823-1-willy@infradead.org>
X-Mailing-List: linux-block@vger.kernel.org

Pass a folio around instead of the page, and make sure the offset
is relative to the start of the folio instead of the start of a page.
Also use size_t for offset & length to make it clear that these are byte
counts, and to support >2GB folios in the future.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 78 ++++++++++++++++++++++--------------------
 1 file changed, 40 insertions(+), 38 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index bbccb031815e..c7c4ae735620 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -75,18 +75,18 @@ static void iomap_page_release(struct folio *folio)
 }

 /*
- * Calculate the range inside the page that we actually need to read.
+ * Calculate the range inside the folio that we actually need to read.
 */
-static void
-iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop,
-		loff_t *pos, loff_t length, unsigned *offp, unsigned *lenp)
+static void iomap_adjust_read_range(struct inode *inode, struct folio *folio,
+		loff_t *pos, loff_t length, size_t *offp, size_t *lenp)
 {
+	struct iomap_page *iop = to_iomap_page(folio);
 	loff_t orig_pos = *pos;
 	loff_t isize = i_size_read(inode);
 	unsigned block_bits = inode->i_blkbits;
 	unsigned block_size = (1 << block_bits);
-	unsigned poff = offset_in_page(*pos);
-	unsigned plen = min_t(loff_t, PAGE_SIZE - poff, length);
+	size_t poff = offset_in_folio(folio, *pos);
+	size_t plen = min_t(loff_t, folio_size(folio) - poff, length);
 	unsigned first = poff >> block_bits;
 	unsigned last = (poff + plen - 1) >> block_bits;

@@ -124,7 +124,7 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop,
 	 * page cache for blocks that are entirely outside of i_size.
 	 */
 	if (orig_pos <= isize && orig_pos + length > isize) {
-		unsigned end = offset_in_page(isize - 1) >> block_bits;
+		unsigned end = offset_in_folio(folio, isize - 1) >> block_bits;

 		if (first <= end && last > end)
 			plen -= (last - end) * block_size;
@@ -134,31 +134,31 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop,
 	*lenp = plen;
 }

-static void iomap_iop_set_range_uptodate(struct page *page,
-		struct iomap_page *iop, unsigned off, unsigned len)
+static void iomap_iop_set_range_uptodate(struct folio *folio,
+		struct iomap_page *iop, size_t off, size_t len)
 {
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = folio->mapping->host;
 	unsigned first = off >> inode->i_blkbits;
 	unsigned last = (off + len - 1) >> inode->i_blkbits;
 	unsigned long flags;

 	spin_lock_irqsave(&iop->uptodate_lock, flags);
 	bitmap_set(iop->uptodate, first, last - first + 1);
-	if (bitmap_full(iop->uptodate, i_blocks_per_page(inode, page)))
-		SetPageUptodate(page);
+	if (bitmap_full(iop->uptodate, i_blocks_per_folio(inode, folio)))
+		folio_mark_uptodate(folio);
 	spin_unlock_irqrestore(&iop->uptodate_lock, flags);
 }

-static void iomap_set_range_uptodate(struct page *page,
-		struct iomap_page *iop, unsigned off, unsigned len)
+static void iomap_set_range_uptodate(struct folio *folio,
+		struct iomap_page *iop, size_t off, size_t len)
 {
-	if (PageError(page))
+	if (folio_test_error(folio))
 		return;

 	if (iop)
-		iomap_iop_set_range_uptodate(page, iop, off, len);
+		iomap_iop_set_range_uptodate(folio, iop, off, len);
 	else
-		SetPageUptodate(page);
+		folio_mark_uptodate(folio);
 }
@@ -170,7 +170,7 @@ static void iomap_finish_folio_read(struct folio *folio, size_t offset,
 		folio_clear_uptodate(folio);
 		folio_set_error(folio);
 	} else {
-		iomap_set_range_uptodate(&folio->page, iop, offset, len);
+		iomap_set_range_uptodate(folio, iop, offset, len);
 	}

 	if (!iop || atomic_sub_and_test(len, &iop->read_bytes_pending))
@@ -202,6 +202,7 @@ static loff_t iomap_read_inline_data(const struct iomap_iter *iter,
 	const struct iomap *iomap = iomap_iter_srcmap(iter);
 	size_t size = i_size_read(iter->inode) - iomap->offset;
 	size_t poff = offset_in_page(iomap->offset);
+	size_t offset = offset_in_folio(folio, iomap->offset);
 	void *addr;

 	if (PageUptodate(page))
@@ -214,7 +215,7 @@ static loff_t iomap_read_inline_data(const struct iomap_iter *iter,
 		return -EIO;
 	if (WARN_ON_ONCE(size > iomap->length))
 		return -EIO;
-	if (poff > 0)
+	if (offset > 0)
 		iop = iomap_page_create(iter->inode, folio);
 	else
 		iop = to_iomap_page(folio);
@@ -223,7 +224,7 @@ static loff_t iomap_read_inline_data(const struct iomap_iter *iter,
 	memcpy(addr, iomap->inline_data, size);
 	memset(addr + size, 0, PAGE_SIZE - poff - size);
 	kunmap_local(addr);
-	iomap_set_range_uptodate(page, iop, poff, PAGE_SIZE - poff);
+	iomap_set_range_uptodate(folio, iop, offset, PAGE_SIZE - poff);
 	return PAGE_SIZE - poff;
 }

@@ -247,7 +248,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
 	struct folio *folio = page_folio(page);
 	struct iomap_page *iop;
 	loff_t orig_pos = pos;
-	unsigned poff, plen;
+	size_t poff, plen;
 	sector_t sector;

 	if (iomap->type == IOMAP_INLINE)
@@ -255,13 +256,13 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,

 	/* zero post-eof blocks as the page may be mapped */
 	iop = iomap_page_create(iter->inode, folio);
-	iomap_adjust_read_range(iter->inode, iop, &pos, length, &poff, &plen);
+	iomap_adjust_read_range(iter->inode, folio, &pos, length, &poff, &plen);
 	if (plen == 0)
 		goto done;

 	if (iomap_block_needs_zeroing(iter, pos)) {
-		zero_user(page, poff, plen);
-		iomap_set_range_uptodate(page, iop, poff, plen);
+		folio_zero_range(folio, poff, plen);
+		iomap_set_range_uptodate(folio, iop, poff, plen);
 		goto done;
 	}

@@ -272,7 +273,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
 	sector = iomap_sector(iomap, pos);
 	if (!ctx->bio ||
 	    bio_end_sector(ctx->bio) != sector ||
-	    bio_add_page(ctx->bio, page, plen, poff) != plen) {
+	    !bio_add_folio(ctx->bio, folio, plen, poff)) {
 		gfp_t gfp = mapping_gfp_constraint(page->mapping, GFP_KERNEL);
 		gfp_t orig_gfp = gfp;
 		unsigned int nr_vecs = DIV_ROUND_UP(length, PAGE_SIZE);
@@ -296,8 +297,9 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
 		ctx->bio->bi_iter.bi_sector = sector;
 		bio_set_dev(ctx->bio, iomap->bdev);
 		ctx->bio->bi_end_io = iomap_read_end_io;
-		__bio_add_page(ctx->bio, page, plen, poff);
+		bio_add_folio(ctx->bio, folio, plen, poff);
 	}
+
 done:
 	/*
 	 * Move the caller beyond our range so that it keeps making progress.
@@ -524,9 +526,8 @@ iomap_write_failed(struct inode *inode, loff_t pos, unsigned len)
 	truncate_pagecache_range(inode, max(pos, i_size), pos + len);
 }

-static int
-iomap_read_page_sync(loff_t block_start, struct page *page, unsigned poff,
-		unsigned plen, const struct iomap *iomap)
+static int iomap_read_folio_sync(loff_t block_start, struct folio *folio,
+		size_t poff, size_t plen, const struct iomap *iomap)
 {
 	struct bio_vec bvec;
 	struct bio bio;
@@ -535,7 +536,7 @@ iomap_read_page_sync(loff_t block_start, struct page *page, unsigned poff,
 	bio.bi_opf = REQ_OP_READ;
 	bio.bi_iter.bi_sector = iomap_sector(iomap, block_start);
 	bio_set_dev(&bio, iomap->bdev);
-	__bio_add_page(&bio, page, plen, poff);
+	bio_add_folio(&bio, folio, plen, poff);
 	return submit_bio_wait(&bio);
 }

@@ -548,14 +549,15 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 	loff_t block_size = i_blocksize(iter->inode);
 	loff_t block_start = round_down(pos, block_size);
 	loff_t block_end = round_up(pos + len, block_size);
-	unsigned from = offset_in_page(pos), to = from + len, poff, plen;
+	size_t from = offset_in_folio(folio, pos), to = from + len;
+	size_t poff, plen;

-	if (PageUptodate(page))
+	if (folio_test_uptodate(folio))
 		return 0;
-	ClearPageError(page);
+	folio_clear_error(folio);

 	do {
-		iomap_adjust_read_range(iter->inode, iop, &block_start,
+		iomap_adjust_read_range(iter->inode, folio, &block_start,
 				block_end - block_start, &poff, &plen);
 		if (plen == 0)
 			break;
@@ -568,14 +570,14 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 		if (iomap_block_needs_zeroing(iter, block_start)) {
 			if (WARN_ON_ONCE(iter->flags & IOMAP_UNSHARE))
 				return -EIO;
-			zero_user_segments(page, poff, from, to, poff + plen);
+			folio_zero_segments(folio, poff, from, to, poff + plen);
 		} else {
-			int status = iomap_read_page_sync(block_start, page,
+			int status = iomap_read_folio_sync(block_start, folio,
 					poff, plen, srcmap);
 			if (status)
 				return status;
 		}
-		iomap_set_range_uptodate(page, iop, poff, plen);
+		iomap_set_range_uptodate(folio, iop, poff, plen);
 	} while ((block_start += plen) < block_end);

 	return 0;
@@ -669,7 +671,7 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 	 */
 	if (unlikely(copied < len && !PageUptodate(page)))
 		return 0;
-	iomap_set_range_uptodate(page, iop, offset_in_page(pos), len);
+	iomap_set_range_uptodate(folio, iop, offset_in_folio(folio, pos), len);
 	__set_page_dirty_nobuffers(page);
 	return copied;
 }
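The reason poff and plen become size_t: offset_in_page() can never exceed
PAGE_SIZE - 1, but offset_in_folio() can return anything up to
folio_size() - 1. With 4KiB pages and a 16KiB folio at file position 0,
byte 9000 of the file has offset_in_page() == 808 but
offset_in_folio() == 9000. The definition is roughly the following sketch
(folio sizes are powers of two, so the mask works):

	#define offset_in_folio_sketch(folio, p) \
		((unsigned long)(p) & (folio_size(folio) - 1))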
From patchwork Mon Nov 8 04:05:39 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12607763
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig
Subject: [PATCH v2 16/28] iomap: Convert iomap_read_inline_data to take a folio
Date: Mon, 8 Nov 2021 04:05:39 +0000
Message-Id: <20211108040551.1942823-17-willy@infradead.org>
In-Reply-To: <20211108040551.1942823-1-willy@infradead.org>
References: <20211108040551.1942823-1-willy@infradead.org>
X-Mailing-List: linux-block@vger.kernel.org

We still only support up to a single page of inline data (at least, per
call to iomap_read_inline_data()), but it can now be written into the
middle of a folio in case we decide to allocate a 16KiB page for a file
that's 8.1KiB in size.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index c7c4ae735620..96a404f11a3b 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -195,9 +195,8 @@ struct iomap_readpage_ctx {
 };

 static loff_t iomap_read_inline_data(const struct iomap_iter *iter,
-		struct page *page)
+		struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
 	struct iomap_page *iop;
 	const struct iomap *iomap = iomap_iter_srcmap(iter);
 	size_t size = i_size_read(iter->inode) - iomap->offset;
@@ -205,7 +204,7 @@ static loff_t iomap_read_inline_data(const struct iomap_iter *iter,
 	size_t offset = offset_in_folio(folio, iomap->offset);
 	void *addr;

-	if (PageUptodate(page))
+	if (folio_test_uptodate(folio))
 		return PAGE_SIZE - poff;

 	if (WARN_ON_ONCE(size > PAGE_SIZE - poff))
@@ -220,7 +219,7 @@ static loff_t iomap_read_inline_data(const struct iomap_iter *iter,
 	else
 		iop = to_iomap_page(folio);

-	addr = kmap_local_page(page) + poff;
+	addr = kmap_local_folio(folio, offset);
 	memcpy(addr, iomap->inline_data, size);
 	memset(addr + size, 0, PAGE_SIZE - poff - size);
 	kunmap_local(addr);
@@ -252,7 +251,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
 	sector_t sector;

 	if (iomap->type == IOMAP_INLINE)
-		return min(iomap_read_inline_data(iter, page), length);
+		return min(iomap_read_inline_data(iter, folio), length);

 	/* zero post-eof blocks as the page may be mapped */
 	iop = iomap_page_create(iter->inode, folio);
@@ -586,12 +585,13 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 static int iomap_write_begin_inline(const struct iomap_iter *iter,
 		struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	int ret;

 	/* needs more work for the tailpacking case; disable for now */
 	if (WARN_ON_ONCE(iomap_iter_srcmap(iter)->offset != 0))
 		return -EIO;
-	ret = iomap_read_inline_data(iter, page);
+	ret = iomap_read_inline_data(iter, folio);
 	if (ret < 0)
 		return ret;
 	return 0;
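To make the 16KiB-folio / 8.1KiB-file case in the commit message
concrete: with 4KiB pages, an inline tail starting at file offset 8192
has poff = offset_in_page(8192) = 0 but
offset = offset_in_folio(folio, 8192) = 8192, so the mapping step must
use the folio-relative offset. A sketch of that step under those assumed
values:

	size_t offset = offset_in_folio(folio, iomap->offset);	/* 8192, not 0 */
	void *addr = kmap_local_folio(folio, offset);		/* middle of the folio */

	memcpy(addr, iomap->inline_data, size);
	kunmap_local(addr);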
From patchwork Mon Nov 8 04:05:40 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12607775
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig
Subject: [PATCH v2 17/28] iomap: Convert readahead and readpage to use a folio
Date: Mon, 8 Nov 2021 04:05:40 +0000
Message-Id: <20211108040551.1942823-18-willy@infradead.org>
In-Reply-To: <20211108040551.1942823-1-willy@infradead.org>
References: <20211108040551.1942823-1-willy@infradead.org>
X-Mailing-List: linux-block@vger.kernel.org

Handle folios of arbitrary size instead of working in PAGE_SIZE units.
readahead_folio() decreases the page refcount for you, so this is not
quite a mechanical change.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 53 +++++++++++++++++++++---------------------
 1 file changed, 26 insertions(+), 27 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 96a404f11a3b..b0b402e1779e 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -188,8 +188,8 @@ static void iomap_read_end_io(struct bio *bio)
 }

 struct iomap_readpage_ctx {
-	struct page		*cur_page;
-	bool			cur_page_in_bio;
+	struct folio		*cur_folio;
+	bool			cur_folio_in_bio;
 	struct bio		*bio;
 	struct readahead_control *rac;
 };
@@ -243,8 +243,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
 	const struct iomap *iomap = &iter->iomap;
 	loff_t pos = iter->pos + offset;
 	loff_t length = iomap_length(iter) - offset;
-	struct page *page = ctx->cur_page;
-	struct folio *folio = page_folio(page);
+	struct folio *folio = ctx->cur_folio;
 	struct iomap_page *iop;
 	loff_t orig_pos = pos;
 	size_t poff, plen;
@@ -265,7 +264,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
 		goto done;
 	}

-	ctx->cur_page_in_bio = true;
+	ctx->cur_folio_in_bio = true;
 	if (iop)
 		atomic_add(plen, &iop->read_bytes_pending);

@@ -273,7 +272,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
 	if (!ctx->bio ||
 	    bio_end_sector(ctx->bio) != sector ||
 	    !bio_add_folio(ctx->bio, folio, plen, poff)) {
-		gfp_t gfp = mapping_gfp_constraint(page->mapping, GFP_KERNEL);
+		gfp_t gfp = mapping_gfp_constraint(folio->mapping, GFP_KERNEL);
 		gfp_t orig_gfp = gfp;
 		unsigned int nr_vecs = DIV_ROUND_UP(length, PAGE_SIZE);
@@ -312,30 +311,31 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter,
 int
 iomap_readpage(struct page *page, const struct iomap_ops *ops)
 {
+	struct folio *folio = page_folio(page);
 	struct iomap_iter iter = {
-		.inode		= page->mapping->host,
-		.pos		= page_offset(page),
-		.len		= PAGE_SIZE,
+		.inode		= folio->mapping->host,
+		.pos		= folio_pos(folio),
+		.len		= folio_size(folio),
 	};
 	struct iomap_readpage_ctx ctx = {
-		.cur_page	= page,
+		.cur_folio	= folio,
 	};
 	int ret;

-	trace_iomap_readpage(page->mapping->host, 1);
+	trace_iomap_readpage(iter.inode, 1);

 	while ((ret = iomap_iter(&iter, ops)) > 0)
 		iter.processed = iomap_readpage_iter(&iter, &ctx, 0);

 	if (ret < 0)
-		SetPageError(page);
+		folio_set_error(folio);

 	if (ctx.bio) {
 		submit_bio(ctx.bio);
-		WARN_ON_ONCE(!ctx.cur_page_in_bio);
+		WARN_ON_ONCE(!ctx.cur_folio_in_bio);
 	} else {
-		WARN_ON_ONCE(ctx.cur_page_in_bio);
-		unlock_page(page);
+		WARN_ON_ONCE(ctx.cur_folio_in_bio);
+		folio_unlock(folio);
 	}

 	/*
@@ -354,15 +354,15 @@ static loff_t iomap_readahead_iter(const struct iomap_iter *iter,
 	loff_t done, ret;

 	for (done = 0; done < length; done += ret) {
-		if (ctx->cur_page && offset_in_page(iter->pos + done) == 0) {
-			if (!ctx->cur_page_in_bio)
-				unlock_page(ctx->cur_page);
-			put_page(ctx->cur_page);
-			ctx->cur_page = NULL;
+		if (ctx->cur_folio &&
+		    offset_in_folio(ctx->cur_folio, iter->pos + done) == 0) {
+			if (!ctx->cur_folio_in_bio)
+				folio_unlock(ctx->cur_folio);
+			ctx->cur_folio = NULL;
 		}
-		if (!ctx->cur_page) {
-			ctx->cur_page = readahead_page(ctx->rac);
-			ctx->cur_page_in_bio = false;
+		if (!ctx->cur_folio) {
+			ctx->cur_folio = readahead_folio(ctx->rac);
+			ctx->cur_folio_in_bio = false;
 		}
 		ret = iomap_readpage_iter(iter, ctx, done);
 	}
@@ -403,10 +403,9 @@ void iomap_readahead(struct readahead_control *rac, const struct iomap_ops *ops)

 	if (ctx.bio)
 		submit_bio(ctx.bio);
-	if (ctx.cur_page) {
-		if (!ctx.cur_page_in_bio)
-			unlock_page(ctx.cur_page);
-		put_page(ctx.cur_page);
+	if (ctx.cur_folio) {
+		if (!ctx.cur_folio_in_bio)
+			folio_unlock(ctx.cur_folio);
 	}
 }
 EXPORT_SYMBOL_GPL(iomap_readahead);
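The non-mechanical part is the reference counting: readahead_page()
returned a page the caller had to put_page(), whereas readahead_folio()
drops that reference itself before returning, which is why the
put_page() calls disappear above. A consuming loop under that contract
looks roughly like this (example_start_read is a hypothetical per-folio
helper):

	struct folio *folio;

	/*
	 * readahead_folio() advances the iterator and drops the ref for
	 * us; the folio stays locked until I/O completion unlocks it, so
	 * no folio_put() is needed here.
	 */
	while ((folio = readahead_folio(rac)) != NULL)
		example_start_read(folio);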
From patchwork Mon Nov 8 04:05:41 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12607777
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig
Subject: [PATCH v2 18/28] iomap: Convert iomap_page_mkwrite to use a folio
Date: Mon, 8 Nov 2021 04:05:41 +0000
Message-Id: <20211108040551.1942823-19-willy@infradead.org>
In-Reply-To: <20211108040551.1942823-1-willy@infradead.org>
References: <20211108040551.1942823-1-willy@infradead.org>
X-Mailing-List: linux-block@vger.kernel.org

If we write to any page in a folio, we have to mark the entire folio as
dirty, and potentially COW the entire folio, because it'll all get
written back as one unit.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index b0b402e1779e..64e54981b651 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -960,10 +960,9 @@ iomap_truncate_page(struct inode *inode, loff_t pos, bool *did_zero,
 }
 EXPORT_SYMBOL_GPL(iomap_truncate_page);

-static loff_t iomap_page_mkwrite_iter(struct iomap_iter *iter,
-		struct page *page)
+static loff_t iomap_folio_mkwrite_iter(struct iomap_iter *iter,
+		struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
 	loff_t length = iomap_length(iter);
 	int ret;

@@ -972,10 +971,10 @@ static loff_t iomap_page_mkwrite_iter(struct iomap_iter *iter,
 				&iter->iomap);
 		if (ret)
 			return ret;
-		block_commit_write(page, 0, length);
+		block_commit_write(&folio->page, 0, length);
 	} else {
-		WARN_ON_ONCE(!PageUptodate(page));
-		set_page_dirty(page);
+		WARN_ON_ONCE(!folio_test_uptodate(folio));
+		folio_mark_dirty(folio);
 	}

 	return length;
@@ -987,24 +986,24 @@ vm_fault_t iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops)
 		.inode		= file_inode(vmf->vma->vm_file),
 		.flags		= IOMAP_WRITE | IOMAP_FAULT,
 	};
-	struct page *page = vmf->page;
+	struct folio *folio = page_folio(vmf->page);
 	ssize_t ret;

-	lock_page(page);
-	ret = page_mkwrite_check_truncate(page, iter.inode);
+	folio_lock(folio);
+	ret = folio_mkwrite_check_truncate(folio, iter.inode);
 	if (ret < 0)
 		goto out_unlock;
-	iter.pos = page_offset(page);
+	iter.pos = folio_pos(folio);
 	iter.len = ret;
 	while ((ret = iomap_iter(&iter, ops)) > 0)
-		iter.processed = iomap_page_mkwrite_iter(&iter, page);
+		iter.processed = iomap_folio_mkwrite_iter(&iter, folio);

 	if (ret < 0)
 		goto out_unlock;
-	wait_for_stable_page(page);
+	folio_wait_stable(folio);
 	return VM_FAULT_LOCKED;
 out_unlock:
-	unlock_page(page);
+	folio_unlock(folio);
 	return block_page_mkwrite_return(ret);
 }
 EXPORT_SYMBOL_GPL(iomap_page_mkwrite);
Wong " Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Christoph Hellwig Subject: [PATCH v2 18/28] iomap: Convert iomap_page_mkwrite to use a folio Date: Mon, 8 Nov 2021 04:05:41 +0000 Message-Id: <20211108040551.1942823-19-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211108040551.1942823-1-willy@infradead.org> References: <20211108040551.1942823-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org If we write to any page in a folio, we have to mark the entire folio as dirty, and potentially COW the entire folio, because it'll all get written back as one unit. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Darrick J. Wong Reviewed-by: Christoph Hellwig --- fs/iomap/buffered-io.c | 25 ++++++++++++------------- 1 file changed, 12 insertions(+), 13 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index b0b402e1779e..64e54981b651 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -960,10 +960,9 @@ iomap_truncate_page(struct inode *inode, loff_t pos, bool *did_zero, } EXPORT_SYMBOL_GPL(iomap_truncate_page); -static loff_t iomap_page_mkwrite_iter(struct iomap_iter *iter, - struct page *page) +static loff_t iomap_folio_mkwrite_iter(struct iomap_iter *iter, + struct folio *folio) { - struct folio *folio = page_folio(page); loff_t length = iomap_length(iter); int ret; @@ -972,10 +971,10 @@ static loff_t iomap_page_mkwrite_iter(struct iomap_iter *iter, &iter->iomap); if (ret) return ret; - block_commit_write(page, 0, length); + block_commit_write(&folio->page, 0, length); } else { - WARN_ON_ONCE(!PageUptodate(page)); - set_page_dirty(page); + WARN_ON_ONCE(!folio_test_uptodate(folio)); + folio_mark_dirty(folio); } return length; @@ -987,24 +986,24 @@ vm_fault_t iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops) .inode = file_inode(vmf->vma->vm_file), .flags = IOMAP_WRITE | IOMAP_FAULT, }; - struct page *page = vmf->page; + struct folio *folio = page_folio(vmf->page); ssize_t ret; - lock_page(page); - ret = page_mkwrite_check_truncate(page, iter.inode); + folio_lock(folio); + ret = folio_mkwrite_check_truncate(folio, iter.inode); if (ret < 0) goto out_unlock; - iter.pos = page_offset(page); + iter.pos = folio_pos(folio); iter.len = ret; while ((ret = iomap_iter(&iter, ops)) > 0) - iter.processed = iomap_page_mkwrite_iter(&iter, page); + iter.processed = iomap_folio_mkwrite_iter(&iter, folio); if (ret < 0) goto out_unlock; - wait_for_stable_page(page); + folio_wait_stable(folio); return VM_FAULT_LOCKED; out_unlock: - unlock_page(page); + folio_unlock(folio); return block_page_mkwrite_return(ret); } EXPORT_SYMBOL_GPL(iomap_page_mkwrite); From patchwork Mon Nov 8 04:05:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12607787 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C8588C4332F for ; Mon, 8 Nov 2021 04:54:10 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A861661390 for ; Mon, 8 Nov 2021 04:54:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via 
---
 fs/iomap/buffered-io.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 64e54981b651..9c61d12028ca 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -881,17 +881,20 @@ EXPORT_SYMBOL_GPL(iomap_file_unshare);

 static s64 __iomap_zero_iter(struct iomap_iter *iter, loff_t pos, u64 length)
 {
+	struct folio *folio;
 	struct page *page;
 	int status;
-	unsigned offset = offset_in_page(pos);
-	unsigned bytes = min_t(u64, PAGE_SIZE - offset, length);
+	size_t offset, bytes;

-	status = iomap_write_begin(iter, pos, bytes, &page);
+	status = iomap_write_begin(iter, pos, length, &page);
 	if (status)
 		return status;
+	folio = page_folio(page);

-	zero_user(page, offset, bytes);
-	mark_page_accessed(page);
+	offset = offset_in_folio(folio, pos);
+	bytes = min_t(u64, folio_size(folio) - offset, length);
+	folio_zero_range(folio, offset, bytes);
+	folio_mark_accessed(folio);

 	return iomap_write_end(iter, pos, bytes, bytes, page);
 }
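Requesting the full remaining length from iomap_write_begin() lets the
page cache hand back whatever folio it holds at pos, and each iteration
then clamps to that folio: zeroing a 64KiB range cached in 16KiB folios
takes 4 iterations rather than 16. The clamp is the same idiom used
throughout the series; a sketch of it in isolation:

	/* bytes we can handle in this folio, starting at pos */
	size_t offset = offset_in_folio(folio, pos);
	size_t bytes = min_t(u64, folio_size(folio) - offset, length);

	folio_zero_range(folio, offset, bytes);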
From patchwork Mon Nov 8 04:05:43 2021
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12607789
From: "Matthew Wilcox (Oracle)"
To: "Darrick J. Wong"
Cc: "Matthew Wilcox (Oracle)", linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig
Subject: [PATCH v2 20/28] iomap: Convert iomap_write_begin() and iomap_write_end() to folios
Date: Mon, 8 Nov 2021 04:05:43 +0000
Message-Id: <20211108040551.1942823-21-willy@infradead.org>
In-Reply-To: <20211108040551.1942823-1-willy@infradead.org>
References: <20211108040551.1942823-1-willy@infradead.org>
X-Mailing-List: linux-block@vger.kernel.org

These functions still only work in PAGE_SIZE chunks, but there are
fewer conversions from tail to head pages as a result of this patch.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 66 ++++++++++++++++++++----------------------
 1 file changed, 31 insertions(+), 35 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 9c61d12028ca..f4ae200adc4c 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
Wong " Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Christoph Hellwig Subject: [PATCH v2 20/28] iomap: Convert iomap_write_begin() and iomap_write_end() to folios Date: Mon, 8 Nov 2021 04:05:43 +0000 Message-Id: <20211108040551.1942823-21-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211108040551.1942823-1-willy@infradead.org> References: <20211108040551.1942823-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org These functions still only work in PAGE_SIZE chunks, but there are fewer conversions from tail to head pages as a result of this patch. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. Wong --- fs/iomap/buffered-io.c | 66 ++++++++++++++++++++---------------------- 1 file changed, 31 insertions(+), 35 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 9c61d12028ca..f4ae200adc4c 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -539,9 +539,8 @@ static int iomap_read_folio_sync(loff_t block_start, struct folio *folio, } static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos, - unsigned len, struct page *page) + size_t len, struct folio *folio) { - struct folio *folio = page_folio(page); const struct iomap *srcmap = iomap_iter_srcmap(iter); struct iomap_page *iop = iomap_page_create(iter->inode, folio); loff_t block_size = i_blocksize(iter->inode); @@ -582,9 +581,8 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos, } static int iomap_write_begin_inline(const struct iomap_iter *iter, - struct page *page) + struct folio *folio) { - struct folio *folio = page_folio(page); int ret; /* needs more work for the tailpacking case; disable for now */ @@ -597,12 +595,12 @@ static int iomap_write_begin_inline(const struct iomap_iter *iter, } static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos, - unsigned len, struct page **pagep) + size_t len, struct folio **foliop) { const struct iomap_page_ops *page_ops = iter->iomap.page_ops; const struct iomap *srcmap = iomap_iter_srcmap(iter); - struct page *page; struct folio *folio; + unsigned fgp = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE | FGP_NOFS; int status = 0; BUG_ON(pos + len > iter->iomap.offset + iter->iomap.length); @@ -618,30 +616,29 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos, return status; } - page = grab_cache_page_write_begin(iter->inode->i_mapping, - pos >> PAGE_SHIFT, AOP_FLAG_NOFS); - if (!page) { + folio = __filemap_get_folio(iter->inode->i_mapping, pos >> PAGE_SHIFT, + fgp, mapping_gfp_mask(iter->inode->i_mapping)); + if (!folio) { status = -ENOMEM; goto out_no_page; } - folio = page_folio(page); if (srcmap->type == IOMAP_INLINE) - status = iomap_write_begin_inline(iter, page); + status = iomap_write_begin_inline(iter, folio); else if (srcmap->flags & IOMAP_F_BUFFER_HEAD) status = __block_write_begin_int(folio, pos, len, NULL, srcmap); else - status = __iomap_write_begin(iter, pos, len, page); + status = __iomap_write_begin(iter, pos, len, folio); if (unlikely(status)) goto out_unlock; - *pagep = page; + *foliop = folio; return 0; out_unlock: - unlock_page(page); - put_page(page); + folio_unlock(folio); + folio_put(folio); iomap_write_failed(iter->inode, pos, len); out_no_page: @@ -651,11 +648,10 @@ static int iomap_write_begin(const 
struct iomap_iter *iter, loff_t pos, } static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len, - size_t copied, struct page *page) + size_t copied, struct folio *folio) { - struct folio *folio = page_folio(page); struct iomap_page *iop = to_iomap_page(folio); - flush_dcache_page(page); + flush_dcache_folio(folio); /* * The blocks that were entirely written will now be uptodate, so we @@ -668,10 +664,10 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len, * non-uptodate page as a zero-length write, and force the caller to * redo the whole thing. */ - if (unlikely(copied < len && !PageUptodate(page))) + if (unlikely(copied < len && !folio_test_uptodate(folio))) return 0; iomap_set_range_uptodate(folio, iop, offset_in_folio(folio, pos), len); - __set_page_dirty_nobuffers(page); + filemap_dirty_folio(inode->i_mapping, folio); return copied; } @@ -695,7 +691,7 @@ static size_t iomap_write_end_inline(const struct iomap_iter *iter, /* Returns the number of bytes copied. May be 0. Cannot be an errno. */ static size_t iomap_write_end(struct iomap_iter *iter, loff_t pos, size_t len, - size_t copied, struct page *page) + size_t copied, struct folio *folio) { const struct iomap_page_ops *page_ops = iter->iomap.page_ops; const struct iomap *srcmap = iomap_iter_srcmap(iter); @@ -706,9 +702,9 @@ static size_t iomap_write_end(struct iomap_iter *iter, loff_t pos, size_t len, ret = iomap_write_end_inline(iter, page, pos, copied); } else if (srcmap->flags & IOMAP_F_BUFFER_HEAD) { ret = block_write_end(NULL, iter->inode->i_mapping, pos, len, - copied, page, NULL); + copied, &folio->page, NULL); } else { - ret = __iomap_write_end(iter->inode, pos, len, copied, page); + ret = __iomap_write_end(iter->inode, pos, len, copied, folio); } /* @@ -720,13 +716,13 @@ static size_t iomap_write_end(struct iomap_iter *iter, loff_t pos, size_t len, i_size_write(iter->inode, pos + ret); iter->iomap.flags |= IOMAP_F_SIZE_CHANGED; } - unlock_page(page); + folio_unlock(folio); if (old_size < pos) pagecache_isize_extended(iter->inode, old_size, pos); if (page_ops && page_ops->page_done) - page_ops->page_done(iter->inode, pos, ret, page); - put_page(page); + page_ops->page_done(iter->inode, pos, ret, &folio->page); + folio_put(folio); if (ret < len) iomap_write_failed(iter->inode, pos, len); @@ -741,6 +737,7 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i) long status = 0; do { + struct folio *folio; struct page *page; unsigned long offset; /* Offset into pagecache page */ unsigned long bytes; /* Bytes to write to page */ @@ -764,16 +761,17 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i) break; } - status = iomap_write_begin(iter, pos, bytes, &page); + status = iomap_write_begin(iter, pos, bytes, &folio); if (unlikely(status)) break; + page = folio_file_page(folio, pos >> PAGE_SHIFT); if (mapping_writably_mapped(iter->inode->i_mapping)) flush_dcache_page(page); copied = copy_page_from_iter_atomic(page, offset, bytes, i); - status = iomap_write_end(iter, pos, bytes, copied, page); + status = iomap_write_end(iter, pos, bytes, copied, folio); if (unlikely(copied != status)) iov_iter_revert(i, copied - status); @@ -839,13 +837,13 @@ static loff_t iomap_unshare_iter(struct iomap_iter *iter) do { unsigned long offset = offset_in_page(pos); unsigned long bytes = min_t(loff_t, PAGE_SIZE - offset, length); - struct page *page; + struct folio *folio; - status = iomap_write_begin(iter, pos, bytes, &page); + status = 
iomap_write_begin(iter, pos, bytes, &folio); if (unlikely(status)) return status; - status = iomap_write_end(iter, pos, bytes, bytes, page); + status = iomap_write_end(iter, pos, bytes, bytes, folio); if (WARN_ON_ONCE(status == 0)) return -EIO; @@ -882,21 +880,19 @@ EXPORT_SYMBOL_GPL(iomap_file_unshare); static s64 __iomap_zero_iter(struct iomap_iter *iter, loff_t pos, u64 length) { struct folio *folio; - struct page *page; int status; size_t offset, bytes; - status = iomap_write_begin(iter, pos, length, &page); + status = iomap_write_begin(iter, pos, length, &folio); if (status) return status; - folio = page_folio(page); offset = offset_in_folio(folio, pos); bytes = min_t(u64, folio_size(folio) - offset, length); folio_zero_range(folio, offset, bytes); folio_mark_accessed(folio); - return iomap_write_end(iter, pos, bytes, bytes, page); + return iomap_write_end(iter, pos, bytes, bytes, folio); } static loff_t iomap_zero_iter(struct iomap_iter *iter, bool *did_zero) From patchwork Mon Nov 8 04:05:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12607799 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DB4E7C433FE for ; Mon, 8 Nov 2021 04:59:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B3693613A6 for ; Mon, 8 Nov 2021 04:59:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229522AbhKHFCT (ORCPT ); Mon, 8 Nov 2021 00:02:19 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53500 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229450AbhKHFCS (ORCPT ); Mon, 8 Nov 2021 00:02:18 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D10D4C061570; Sun, 7 Nov 2021 20:59:34 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=rVWwnvlYtMYK/tsNbIb68fg87VwCrXsoAMMv96ZQhKM=; b=aGBJf1jW6NXDQe+fAPDx5v4/NL uacbQJoEXKgX801UW3cAQKryLqp/9XuGA3KvhNAbF+fRHjdvV2WcmhxvxUoM50tS+bYmJeCbRVnMa DV93AMcpkWMKmvfmniINCWbPMnZKPw0PoFC8VflW5kqN4BTCHJxHRpYqz86YN1Es2uIf8I9B5Q9xn 4LhNGnDFO4MvobKI6EXyyXE102YRoZuYuVYrV+OQ+yHBsfTEAncf5KBpu6jNIrDfvHcIeYHbbEbEZ 2svGprAh1YHsw2fMexOoOC2OP39hN3o6OompfDzsVqq9U+tRCpejdf1+qy6K89Rh9YmVZ48SL0Gaw HKx3RuAQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mjwfu-008AaQ-A4; Mon, 08 Nov 2021 04:55:33 +0000 From: "Matthew Wilcox (Oracle)" To: "Darrick J . 
Wong " Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Christoph Hellwig Subject: [PATCH v2 21/28] iomap: Convert iomap_write_end_inline to take a folio Date: Mon, 8 Nov 2021 04:05:44 +0000 Message-Id: <20211108040551.1942823-22-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211108040551.1942823-1-willy@infradead.org> References: <20211108040551.1942823-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org This conversion is only safe because iomap only supports writes to inline data which starts at the beginning of the file. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Darrick J. Wong Reviewed-by: Christoph Hellwig --- fs/iomap/buffered-io.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index f4ae200adc4c..6b73d070e3a1 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -672,16 +672,16 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len, } static size_t iomap_write_end_inline(const struct iomap_iter *iter, - struct page *page, loff_t pos, size_t copied) + struct folio *folio, loff_t pos, size_t copied) { const struct iomap *iomap = &iter->iomap; void *addr; - WARN_ON_ONCE(!PageUptodate(page)); + WARN_ON_ONCE(!folio_test_uptodate(folio)); BUG_ON(!iomap_inline_data_valid(iomap)); - flush_dcache_page(page); - addr = kmap_local_page(page) + pos; + flush_dcache_folio(folio); + addr = kmap_local_folio(folio, pos); memcpy(iomap_inline_data(iomap, pos), addr, copied); kunmap_local(addr); @@ -699,7 +699,7 @@ static size_t iomap_write_end(struct iomap_iter *iter, loff_t pos, size_t len, size_t ret; if (srcmap->type == IOMAP_INLINE) { - ret = iomap_write_end_inline(iter, page, pos, copied); + ret = iomap_write_end_inline(iter, folio, pos, copied); } else if (srcmap->flags & IOMAP_F_BUFFER_HEAD) { ret = block_write_end(NULL, iter->inode->i_mapping, pos, len, copied, &folio->page, NULL); From patchwork Mon Nov 8 04:05:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12607801 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 37A5CC4332F for ; Mon, 8 Nov 2021 05:02:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 145C961208 for ; Mon, 8 Nov 2021 05:02:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229875AbhKHFFJ (ORCPT ); Mon, 8 Nov 2021 00:05:09 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54124 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229450AbhKHFFJ (ORCPT ); Mon, 8 Nov 2021 00:05:09 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4CEFCC061570; Sun, 7 Nov 2021 21:02:25 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: 
Content-Type:Content-ID:Content-Description; bh=hqzk9C0GyehpZY+gPkl7vEk6lhsjyTSZv0sKQclzvy4=; b=ijdPy9dcNtaaWmOOUJ5D/1IrHN Ro8uT5rK6dnv96Yd/DBc70v9bqq2KJhcCDmnPeAod92IM9k/EZagG3I+i8ZnmOQV674bQlzElLM1S GQyz97nshnvVBgEyAK+qToCYN5W4jDjn0FG54Fl35k21P868PsViQBycqkSxVHQAXB6/Q8VaxsXrL spySCCURpn4oa7e/COwzFko3V51TVGhJOaRvOEsKUqU1HjLr5jC63IzABvZKMkJngOGR6aNV1lc7+ uxcLru3Il6omxGFisFqPXcSNkOB/bgWXjryFPpYPtNJFmo+5ywh5hHvNMIfbCA1prcM6STkwLYRSL nC95NFNA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mjwiz-008Aeu-Tb; Mon, 08 Nov 2021 04:58:53 +0000 From: "Matthew Wilcox (Oracle)" To: "Darrick J . Wong " Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Christoph Hellwig Subject: [PATCH v2 22/28] iomap,xfs: Convert ->discard_page to ->discard_folio Date: Mon, 8 Nov 2021 04:05:45 +0000 Message-Id: <20211108040551.1942823-23-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211108040551.1942823-1-willy@infradead.org> References: <20211108040551.1942823-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org XFS has the only implementation of ->discard_page today, so convert it to use folios in the same patch as converting the API. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. Wong --- fs/iomap/buffered-io.c | 4 ++-- fs/xfs/xfs_aops.c | 24 ++++++++++++------------ include/linux/iomap.h | 2 +- 3 files changed, 15 insertions(+), 15 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 6b73d070e3a1..20610b1364d6 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1346,8 +1346,8 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc, * won't be affected by I/O completion and we must unlock it * now. */ - if (wpc->ops->discard_page) - wpc->ops->discard_page(page, file_offset); + if (wpc->ops->discard_folio) + wpc->ops->discard_folio(folio, file_offset); if (!count) { ClearPageUptodate(page); unlock_page(page); diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c index c8c15c3c3147..4098a9875c5b 100644 --- a/fs/xfs/xfs_aops.c +++ b/fs/xfs/xfs_aops.c @@ -437,37 +437,37 @@ xfs_prepare_ioend( * see a ENOSPC in writeback). 
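 * The folio handed to xfs_discard_folio() below is the one whose
 * blocks failed to map: we punch out its delalloc extents from the
 * failing offset onwards and then invalidate the rest of the folio.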
*/ static void -xfs_discard_page( - struct page *page, - loff_t fileoff) +xfs_discard_folio( + struct folio *folio, + loff_t pos) { - struct inode *inode = page->mapping->host; + struct inode *inode = folio->mapping->host; struct xfs_inode *ip = XFS_I(inode); struct xfs_mount *mp = ip->i_mount; - unsigned int pageoff = offset_in_page(fileoff); - xfs_fileoff_t start_fsb = XFS_B_TO_FSBT(mp, fileoff); - xfs_fileoff_t pageoff_fsb = XFS_B_TO_FSBT(mp, pageoff); + size_t offset = offset_in_folio(folio, pos); + xfs_fileoff_t start_fsb = XFS_B_TO_FSBT(mp, pos); + xfs_fileoff_t pageoff_fsb = XFS_B_TO_FSBT(mp, offset); int error; if (xfs_is_shutdown(mp)) goto out_invalidate; xfs_alert_ratelimited(mp, - "page discard on page "PTR_FMT", inode 0x%llx, offset %llu.", - page, ip->i_ino, fileoff); + "page discard on page "PTR_FMT", inode 0x%llx, pos %llu.", + folio, ip->i_ino, pos); error = xfs_bmap_punch_delalloc_range(ip, start_fsb, - i_blocks_per_page(inode, page) - pageoff_fsb); + i_blocks_per_folio(inode, folio) - pageoff_fsb); if (error && !xfs_is_shutdown(mp)) xfs_alert(mp, "page discard unable to remove delalloc mapping."); out_invalidate: - iomap_invalidatepage(page, pageoff, PAGE_SIZE - pageoff); + iomap_invalidate_folio(folio, offset, folio_size(folio) - offset); } static const struct iomap_writeback_ops xfs_writeback_ops = { .map_blocks = xfs_map_blocks, .prepare_ioend = xfs_prepare_ioend, - .discard_page = xfs_discard_page, + .discard_folio = xfs_discard_folio, }; STATIC int diff --git a/include/linux/iomap.h b/include/linux/iomap.h index 29491fb9c5ba..5ef5088dbbd8 100644 --- a/include/linux/iomap.h +++ b/include/linux/iomap.h @@ -285,7 +285,7 @@ struct iomap_writeback_ops { * Optional, allows the file system to discard state on a page where * we failed to submit any I/O. 
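 * The folio is still locked when this is called, and pos is the file
 * position of the first block that failed to map; the filesystem is
 * expected to tear down any speculative allocations covering the rest
 * of the folio, as xfs_discard_folio() above does.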
*/ - void (*discard_page)(struct page *page, loff_t fileoff); + void (*discard_folio)(struct folio *folio, loff_t pos); }; struct iomap_writepage_ctx { From patchwork Mon Nov 8 04:05:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12607813 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8C984C433FE for ; Mon, 8 Nov 2021 05:04:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7075F61208 for ; Mon, 8 Nov 2021 05:04:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230205AbhKHFHI (ORCPT ); Mon, 8 Nov 2021 00:07:08 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54560 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229450AbhKHFHH (ORCPT ); Mon, 8 Nov 2021 00:07:07 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3C57DC061570; Sun, 7 Nov 2021 21:04:24 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=7IXo2o8FJyRMGLPdB55C0nZsqEEI7PRbrvWxFCTwLcw=; b=ugvPMeOprIW156VNNFFhIlFBRn 16x5ztD9wQHFqzxo6xuXBvg2BLnBWtzBb6qip0H+8Q/9bVDKJWa+Q3L2y3lCCWsln/P72sA2UAR22 UIHsLRAG734lE+7okwnjTltXn/hIO74f6eFM8UJDYElJ2IJiTeuOtfqKY5Iw/3lNXTVt8N2dc1qEk RO89/vCaTGFNyIkeXwSQ/UHiIz/LH8pBP4vkGkQ3sYZfVis11P8FQXXyV+VMCA3OEdStMKrw7zNil HFfxZZEQ1dkQUl6zinH+WtofQXdjcnL6iFYOToWbpd6t8AbGZzRDfLwb0HkBkLdGgIsVDXc7PZq2Z qwSFtl4w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mjwlk-008Akf-Ei; Mon, 08 Nov 2021 05:01:12 +0000 From: "Matthew Wilcox (Oracle)" To: "Darrick J . Wong " Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Christoph Hellwig Subject: [PATCH v2 23/28] iomap: Simplify iomap_writepage_map() Date: Mon, 8 Nov 2021 04:05:46 +0000 Message-Id: <20211108040551.1942823-24-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211108040551.1942823-1-willy@infradead.org> References: <20211108040551.1942823-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Rename end_offset to end_pos and file_offset to pos to match the rest of the file. Simplify the loop by calculating nblocks up front instead of each time around the loop. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. 
Wong --- fs/iomap/buffered-io.c | 21 ++++++++++----------- 1 file changed, 10 insertions(+), 11 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 20610b1364d6..87190b86ef1f 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1293,37 +1293,36 @@ iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page, static int iomap_writepage_map(struct iomap_writepage_ctx *wpc, struct writeback_control *wbc, struct inode *inode, - struct page *page, u64 end_offset) + struct page *page, u64 end_pos) { struct folio *folio = page_folio(page); struct iomap_page *iop = iomap_page_create(inode, folio); struct iomap_ioend *ioend, *next; unsigned len = i_blocksize(inode); - u64 file_offset; /* file offset of page */ + unsigned nblocks = i_blocks_per_folio(inode, folio); + u64 pos = folio_pos(folio); int error = 0, count = 0, i; LIST_HEAD(submit_list); WARN_ON_ONCE(iop && atomic_read(&iop->write_bytes_pending) != 0); /* - * Walk through the page to find areas to write back. If we run off the - * end of the current map or find the current map invalid, grab a new - * one. + * Walk through the folio to find areas to write back. If we + * run off the end of the current map or find the current map + * invalid, grab a new one. */ - for (i = 0, file_offset = page_offset(page); - i < (PAGE_SIZE >> inode->i_blkbits) && file_offset < end_offset; - i++, file_offset += len) { + for (i = 0; i < nblocks && pos < end_pos; i++, pos += len) { if (iop && !test_bit(i, iop->uptodate)) continue; - error = wpc->ops->map_blocks(wpc, inode, file_offset); + error = wpc->ops->map_blocks(wpc, inode, pos); if (error) break; if (WARN_ON_ONCE(wpc->iomap.type == IOMAP_INLINE)) continue; if (wpc->iomap.type == IOMAP_HOLE) continue; - iomap_add_to_ioend(inode, file_offset, page, iop, wpc, wbc, + iomap_add_to_ioend(inode, pos, page, iop, wpc, wbc, &submit_list); count++; } @@ -1347,7 +1346,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc, * now. 
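 * If any blocks were added to an ioend, the folio is instead set
 * under writeback and unlocked below, and I/O completion clears the
 * writeback state after recording the error.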
*/ if (wpc->ops->discard_folio) - wpc->ops->discard_folio(folio, file_offset); + wpc->ops->discard_folio(folio, pos); if (!count) { ClearPageUptodate(page); unlock_page(page); From patchwork Mon Nov 8 04:05:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12607815 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0440DC433EF for ; Mon, 8 Nov 2021 05:06:08 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DBE9461357 for ; Mon, 8 Nov 2021 05:06:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231268AbhKHFIu (ORCPT ); Mon, 8 Nov 2021 00:08:50 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54936 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230391AbhKHFIt (ORCPT ); Mon, 8 Nov 2021 00:08:49 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E7208C061570; Sun, 7 Nov 2021 21:06:05 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=ikXVfb3U7753RVO9peExC5rw1TDOslVL28+m2Sxvq0U=; b=SXKuFm0Xf05uGZuTi/Efy41wbA 2XGeUfX3gdJGqjg03noxDJ1jTsx/oSEeMswJ2Ne7yQisOKse62Kovgsdad6V48rbA5R6kh6izi4kk 8RGZjU7R+HXctSKhKl5S18HrHZVC03dJb51vfun8NzbpJBwTt3/9G66L8uD+K2GgB6twSMJx1NC+N LXhF3MWUYNhKRDy49FDGFTyCUm/vdDlKMf7i1B5ulHQIZT8GQFE6etZd1b/9bbX+p0tGShA6pSovZ cF4FAafZ1Eb8rwbDCcCG9jl771R+el5zW1D2xx3eD3ZVM+TR/ekrHWzMszWa3tT4zVnx5tFtEQz54 /Cv1Vc2A==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mjwnm-008AoK-34; Mon, 08 Nov 2021 05:02:57 +0000 From: "Matthew Wilcox (Oracle)" To: "Darrick J . Wong " Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Christoph Hellwig Subject: [PATCH v2 24/28] iomap: Simplify iomap_do_writepage() Date: Mon, 8 Nov 2021 04:05:47 +0000 Message-Id: <20211108040551.1942823-25-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211108040551.1942823-1-willy@infradead.org> References: <20211108040551.1942823-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Rename end_offset to end_pos and offset_into_page to poff to match the rest of the file. Simplify the handling of the last page straddling i_size by doing the EOF check based on the byte granularity i_size instead of converting to a pgoff prematurely. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. 
Wong --- fs/iomap/buffered-io.c | 23 ++++++++++------------- 1 file changed, 10 insertions(+), 13 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 87190b86ef1f..b168cc0fe8be 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1394,9 +1394,7 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data) { struct iomap_writepage_ctx *wpc = data; struct inode *inode = page->mapping->host; - pgoff_t end_index; - u64 end_offset; - loff_t offset; + u64 end_pos, isize; trace_iomap_writepage(inode, page_offset(page), PAGE_SIZE); @@ -1427,11 +1425,9 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data) * | desired writeback range | see else | * ---------------------------------^------------------| */ - offset = i_size_read(inode); - end_index = offset >> PAGE_SHIFT; - if (page->index < end_index) - end_offset = (loff_t)(page->index + 1) << PAGE_SHIFT; - else { + isize = i_size_read(inode); + end_pos = page_offset(page) + PAGE_SIZE; + if (end_pos > isize) { /* * Check whether the page to write out is beyond or straddles * i_size or not. @@ -1443,7 +1439,8 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data) * | | Straddles | * ---------------------------------^-----------|--------| */ - unsigned offset_into_page = offset & (PAGE_SIZE - 1); + size_t poff = offset_in_page(isize); + pgoff_t end_index = isize >> PAGE_SHIFT; /* * Skip the page if it's fully outside i_size, e.g. due to a @@ -1463,7 +1460,7 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data) * offset is just equal to the EOF. */ if (page->index > end_index || - (page->index == end_index && offset_into_page == 0)) + (page->index == end_index && poff == 0)) goto redirty; /* @@ -1474,13 +1471,13 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data) * memory is zeroed when mapped, and writes to that region are * not written out to the file." 
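 * Zeroing from the EOF offset to the end of the page here keeps
 * whatever stale data sits past i_size from ever reaching disk.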
*/ - zero_user_segment(page, offset_into_page, PAGE_SIZE); + zero_user_segment(page, poff, PAGE_SIZE); /* Adjust the end_offset to the end of file */ - end_offset = offset; + end_pos = isize; } - return iomap_writepage_map(wpc, wbc, inode, page, end_offset); + return iomap_writepage_map(wpc, wbc, inode, page, end_pos); redirty: redirty_page_for_writepage(wbc, page); From patchwork Mon Nov 8 04:05:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12607817 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3BFA9C433F5 for ; Mon, 8 Nov 2021 05:07:16 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 17D9E60D42 for ; Mon, 8 Nov 2021 05:07:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232457AbhKHFJ6 (ORCPT ); Mon, 8 Nov 2021 00:09:58 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55248 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231947AbhKHFJ5 (ORCPT ); Mon, 8 Nov 2021 00:09:57 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C71DCC061570; Sun, 7 Nov 2021 21:07:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=3aaCDVfG47oywaemBl2GN9Mv/+JwqTsxtdXzUkzFH+g=; b=POs0LgCPGPLUDA9UFRZT1xYv53 D0GAmtyy5Djl/wLP1CRN6SQYXNCxxizb8aqGx32mr+S4CT4XbJENrngTUpFlnTTei6895dsBCdb4e rLRHy2SvE3D/Tm037cGs8D+iQdbCcc2nmWN+okGRna9Aw66zl12M7PbxULpzBxX9HTaIDQBVfXrKE CKEXXzwhl27y3pAvtw4Wanpa5K9HaEnZNaixq40mYmm21xD7VVHKnQ5fGVTQ0KMQvtLl/X6CqHUER ce3V2lcOx9gFuQna6r33ncUTKH52IzVSDJk75+OR1HAKul4KS9ZJr8goAaDa9Vk5wO9WB8qiY82jZ Qo2H1AHA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mjwp4-008Aqa-4Y; Mon, 08 Nov 2021 05:04:22 +0000 From: "Matthew Wilcox (Oracle)" To: "Darrick J . Wong " Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Christoph Hellwig Subject: [PATCH v2 25/28] iomap: Convert iomap_add_to_ioend() to take a folio Date: Mon, 8 Nov 2021 04:05:48 +0000 Message-Id: <20211108040551.1942823-26-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211108040551.1942823-1-willy@infradead.org> References: <20211108040551.1942823-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org We still iterate one block at a time, but now we call compound_head() less often. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. 
Wong --- fs/iomap/buffered-io.c | 70 ++++++++++++++++++++---------------------- 1 file changed, 34 insertions(+), 36 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index b168cc0fe8be..90f9f33ffe41 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1249,29 +1249,29 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset, * first; otherwise finish off the current ioend and start another. */ static void -iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page, +iomap_add_to_ioend(struct inode *inode, loff_t pos, struct folio *folio, struct iomap_page *iop, struct iomap_writepage_ctx *wpc, struct writeback_control *wbc, struct list_head *iolist) { - sector_t sector = iomap_sector(&wpc->iomap, offset); + sector_t sector = iomap_sector(&wpc->iomap, pos); unsigned len = i_blocksize(inode); - unsigned poff = offset & (PAGE_SIZE - 1); + size_t poff = offset_in_folio(folio, pos); - if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, offset, sector)) { + if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, pos, sector)) { if (wpc->ioend) list_add(&wpc->ioend->io_list, iolist); - wpc->ioend = iomap_alloc_ioend(inode, wpc, offset, sector, wbc); + wpc->ioend = iomap_alloc_ioend(inode, wpc, pos, sector, wbc); } - if (bio_add_page(wpc->ioend->io_bio, page, len, poff) != len) { + if (!bio_add_folio(wpc->ioend->io_bio, folio, len, poff)) { wpc->ioend->io_bio = iomap_chain_bio(wpc->ioend->io_bio); - __bio_add_page(wpc->ioend->io_bio, page, len, poff); + bio_add_folio(wpc->ioend->io_bio, folio, len, poff); } if (iop) atomic_add(len, &iop->write_bytes_pending); wpc->ioend->io_size += len; - wbc_account_cgroup_owner(wbc, page, len); + wbc_account_cgroup_owner(wbc, &folio->page, len); } /* @@ -1293,9 +1293,8 @@ iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page, static int iomap_writepage_map(struct iomap_writepage_ctx *wpc, struct writeback_control *wbc, struct inode *inode, - struct page *page, u64 end_pos) + struct folio *folio, u64 end_pos) { - struct folio *folio = page_folio(page); struct iomap_page *iop = iomap_page_create(inode, folio); struct iomap_ioend *ioend, *next; unsigned len = i_blocksize(inode); @@ -1322,15 +1321,15 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc, continue; if (wpc->iomap.type == IOMAP_HOLE) continue; - iomap_add_to_ioend(inode, pos, page, iop, wpc, wbc, + iomap_add_to_ioend(inode, pos, folio, iop, wpc, wbc, &submit_list); count++; } WARN_ON_ONCE(!wpc->ioend && !list_empty(&submit_list)); - WARN_ON_ONCE(!PageLocked(page)); - WARN_ON_ONCE(PageWriteback(page)); - WARN_ON_ONCE(PageDirty(page)); + WARN_ON_ONCE(!folio_test_locked(folio)); + WARN_ON_ONCE(folio_test_writeback(folio)); + WARN_ON_ONCE(folio_test_dirty(folio)); /* * We cannot cancel the ioend directly here on error. We may have @@ -1348,14 +1347,14 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc, if (wpc->ops->discard_folio) wpc->ops->discard_folio(folio, pos); if (!count) { - ClearPageUptodate(page); - unlock_page(page); + folio_clear_uptodate(folio); + folio_unlock(folio); goto done; } } - set_page_writeback(page); - unlock_page(page); + folio_start_writeback(folio); + folio_unlock(folio); /* * Preserve the original error if there was one; catch @@ -1376,9 +1375,9 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc, * with a partial page truncate on a sub-page block sized filesystem. 
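 * In that case no blocks were added to an ioend, so end writeback
 * on the folio right here.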
*/ if (!count) - end_page_writeback(page); + folio_end_writeback(folio); done: - mapping_set_error(page->mapping, error); + mapping_set_error(folio->mapping, error); return error; } @@ -1392,14 +1391,15 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc, static int iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data) { + struct folio *folio = page_folio(page); struct iomap_writepage_ctx *wpc = data; - struct inode *inode = page->mapping->host; + struct inode *inode = folio->mapping->host; u64 end_pos, isize; - trace_iomap_writepage(inode, page_offset(page), PAGE_SIZE); + trace_iomap_writepage(inode, folio_pos(folio), folio_size(folio)); /* - * Refuse to write the page out if we're called from reclaim context. + * Refuse to write the folio out if we're called from reclaim context. * * This avoids stack overflows when called from deeply used stacks in * random callers for direct reclaim or memcg reclaim. We explicitly @@ -1413,10 +1413,10 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data) goto redirty; /* - * Is this page beyond the end of the file? + * Is this folio beyond the end of the file? * - * The page index is less than the end_index, adjust the end_offset - * to the highest offset that this page should represent. + * The folio index is less than the end_index, adjust the end_pos + * to the highest offset that this folio should represent. * ----------------------------------------------------- * | file mapping | | * ----------------------------------------------------- @@ -1426,7 +1426,7 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data) * ---------------------------------^------------------| */ isize = i_size_read(inode); - end_pos = page_offset(page) + PAGE_SIZE; + end_pos = folio_pos(folio) + folio_size(folio); if (end_pos > isize) { /* * Check whether the page to write out is beyond or straddles @@ -1439,7 +1439,7 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data) * | | Straddles | * ---------------------------------^-----------|--------| */ - size_t poff = offset_in_page(isize); + size_t poff = offset_in_folio(folio, isize); pgoff_t end_index = isize >> PAGE_SHIFT; /* @@ -1459,8 +1459,8 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data) * checking if the page is totally beyond i_size or if its * offset is just equal to the EOF. */ - if (page->index > end_index || - (page->index == end_index && poff == 0)) + if (folio->index > end_index || + (folio->index == end_index && poff == 0)) goto redirty; /* @@ -1471,17 +1471,15 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data) * memory is zeroed when mapped, and writes to that region are * not written out to the file." 
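 * With multi-page folios the range past i_size can span several
 * pages, which is why the zeroing below runs to folio_size() rather
 * than PAGE_SIZE.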
*/ - zero_user_segment(page, poff, PAGE_SIZE); - - /* Adjust the end_offset to the end of file */ + folio_zero_segment(folio, poff, folio_size(folio)); end_pos = isize; } - return iomap_writepage_map(wpc, wbc, inode, page, end_pos); + return iomap_writepage_map(wpc, wbc, inode, folio, end_pos); redirty: - redirty_page_for_writepage(wbc, page); - unlock_page(page); + folio_redirty_for_writepage(wbc, folio); + folio_unlock(folio); return 0; } From patchwork Mon Nov 8 04:05:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12607847 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 82681C433FE for ; Mon, 8 Nov 2021 05:08:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6368561357 for ; Mon, 8 Nov 2021 05:08:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230214AbhKHFLU (ORCPT ); Mon, 8 Nov 2021 00:11:20 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55566 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229988AbhKHFLU (ORCPT ); Mon, 8 Nov 2021 00:11:20 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7E597C061570; Sun, 7 Nov 2021 21:08:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=wRvLnGx2ea/332rHz/tY/wsbBDAO1GNqTpTBxT5LYMg=; b=ly9K6nDhscWCDxpeWoIJXZ8Suw YXo19E24kKuQpHcyh+BnBAXJnvzBR4V+LRSusQ5+HZOAcjqGBN0H57ReWGn2FSAd0kNoFcwSD1yYN NwAjjUYZkOfGQNfFHI/c3GH4r9e2YEi/djH6iiAuuFG/T3FURmjh9JO4LnTUFCl+2vv71jI8g+H4F CAgItCixbD0KytosPGxp+wmHxcGz89Js4Zp3yJ51ZIS9onPWrD1d55YCnMPbZPE+RfyUIMG+wgAzS eeQN+c8cteOQWqbO/QUs+bZ8Snlf5/19tv8bOtGMnzmMteQZRz17YWMkdkNQoJvqCjT5+/bjKk5jb AmZusLhw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mjwqc-008Asi-R6; Mon, 08 Nov 2021 05:06:04 +0000 From: "Matthew Wilcox (Oracle)" To: "Darrick J . Wong " Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Christoph Hellwig Subject: [PATCH v2 26/28] iomap: Convert iomap_migrate_page() to use folios Date: Mon, 8 Nov 2021 04:05:49 +0000 Message-Id: <20211108040551.1942823-27-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211108040551.1942823-1-willy@infradead.org> References: <20211108040551.1942823-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org The arguments are still pages for now, but we can use folios internally and cut out a lot of calls to compound_head(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. 
Wong --- fs/iomap/buffered-io.c | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 90f9f33ffe41..6830e4c15c61 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -493,19 +493,21 @@ int iomap_migrate_page(struct address_space *mapping, struct page *newpage, struct page *page, enum migrate_mode mode) { + struct folio *folio = page_folio(page); + struct folio *newfolio = page_folio(newpage); int ret; - ret = migrate_page_move_mapping(mapping, newpage, page, 0); + ret = folio_migrate_mapping(mapping, newfolio, folio, 0); if (ret != MIGRATEPAGE_SUCCESS) return ret; - if (page_has_private(page)) - attach_page_private(newpage, detach_page_private(page)); + if (folio_test_private(folio)) + folio_attach_private(newfolio, folio_detach_private(folio)); if (mode != MIGRATE_SYNC_NO_COPY) - migrate_page_copy(newpage, page); + folio_migrate_copy(newfolio, folio); else - migrate_page_states(newpage, page); + folio_migrate_flags(newfolio, folio); return MIGRATEPAGE_SUCCESS; } EXPORT_SYMBOL_GPL(iomap_migrate_page); From patchwork Mon Nov 8 04:05:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12607849 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 51D77C433F5 for ; Mon, 8 Nov 2021 05:10:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 32CC760E98 for ; Mon, 8 Nov 2021 05:10:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235952AbhKHFNO (ORCPT ); Mon, 8 Nov 2021 00:13:14 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56000 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229988AbhKHFNO (ORCPT ); Mon, 8 Nov 2021 00:13:14 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 41408C061570; Sun, 7 Nov 2021 21:10:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=7XKLruoz0217yaASgnev6mQxYy91tCfaWYKYnfl/XXc=; b=K9JgAVRZMrX3qswTs8tZAvFa5s A61kU2ILIzKrLQOkUwa1M0WBUnJ/ferEnfwxdtTYMbU1iXK24HK8m6hWMKGbhVpJFe9fu+L6I1568 uq5hkEaIk0t7f+8oG7dLZxM18Uq04uWVVurUPYCPz8J+G0MF7qkrrADtB9p0kiRaTR85L7ITTKYJ4 jG5v31CtgCp+9sDqTThy6zmyr2GDwfFbVzhhwvKHnayIbgpaI5PpBoV9EMuuBslGeyjTG4DHlQeAj 4BO50uVN1EzbUr673TXUViecqzkahTfDu+56elWi3xEzk6FZUei0ya/WpazVBcPgCXsDopfCoUugC dLaLuIWw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mjwsC-008Avl-B4; Mon, 08 Nov 2021 05:07:37 +0000 From: "Matthew Wilcox (Oracle)" To: "Darrick J . 
Wong " Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Christoph Hellwig Subject: [PATCH v2 27/28] iomap: Support multi-page folios in invalidatepage Date: Mon, 8 Nov 2021 04:05:50 +0000 Message-Id: <20211108040551.1942823-28-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211108040551.1942823-1-willy@infradead.org> References: <20211108040551.1942823-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org If we're punching a hole in a multi-page folio, we need to remove the per-folio iomap data as the folio is about to be split and each page will need its own. If a dirty folio is only partially-uptodate, the iomap data contains the information about which blocks cannot be written back, so assert that a dirty folio is fully uptodate. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. Wong --- fs/iomap/buffered-io.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 6830e4c15c61..265c7f8e7134 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -470,13 +470,18 @@ void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len) trace_iomap_invalidatepage(folio->mapping->host, offset, len); /* - * If we're invalidating the entire page, clear the dirty state from it - * and release it to avoid unnecessary buildup of the LRU. + * If we're invalidating the entire folio, clear the dirty state + * from it and release it to avoid unnecessary buildup of the LRU. */ if (offset == 0 && len == folio_size(folio)) { WARN_ON_ONCE(folio_test_writeback(folio)); folio_cancel_dirty(folio); iomap_page_release(folio); + } else if (folio_test_multi(folio)) { + /* Must release the iop so the page can be split */ + WARN_ON_ONCE(!folio_test_uptodate(folio) && + folio_test_dirty(folio)); + iomap_page_release(folio); } } EXPORT_SYMBOL_GPL(iomap_invalidate_folio); From patchwork Mon Nov 8 04:05:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 12607851 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5006AC433FE for ; Mon, 8 Nov 2021 05:13:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 312F160E97 for ; Mon, 8 Nov 2021 05:13:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235206AbhKHFPq (ORCPT ); Mon, 8 Nov 2021 00:15:46 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56584 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232906AbhKHFPq (ORCPT ); Mon, 8 Nov 2021 00:15:46 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 723BEC061570; Sun, 7 Nov 2021 21:13:02 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: 
Content-Type:Content-ID:Content-Description; bh=nzuUy/RymcDXWey2K/SRNJ0dp8iKpgBkhbpM0hGxxL4=; b=MTukNndNFo7xpUENhAmjVatEPu BDGXech/R4FW5SDL9ituYcQvpsJPBe4lv/MrJmn2MrD6lr0rgEP3JR13/MjnTR8dM6g5UlrbFxC8v Yle2CRx7DA8Q3RVo6LbA6gCW2Eous6ohXI8s6sAqlPuQG5e+EICfAMuxDA2KQOWOt8cnHdN93q0jd 3SrXrsF8IYVmrWkbTRLfeNwI3jbp0AyfIWBQPvWkvgwoXyLYZA3t9rNVw6tMtM4pl7dGY7mHF31ZT 9xYYmsaAhseZYNlg5ZPrtol0LTpiGYVj1rTJTWqpW7o12mdLbc7IVQjf5+9wI9oz+XTlJU5h26U+f XjOJi98g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1mjwtX-008AzS-1k; Mon, 08 Nov 2021 05:09:06 +0000 From: "Matthew Wilcox (Oracle)" To: "Darrick J . Wong " Cc: "Matthew Wilcox (Oracle)" , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, Jens Axboe , Christoph Hellwig , Christoph Hellwig Subject: [PATCH v2 28/28] xfs: Support multi-page folios Date: Mon, 8 Nov 2021 04:05:51 +0000 Message-Id: <20211108040551.1942823-29-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20211108040551.1942823-1-willy@infradead.org> References: <20211108040551.1942823-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Now that iomap has been converted, XFS is multi-page folio safe. Indicate to the VFS that it can now create multi-page folios for XFS. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. Wong --- fs/xfs/xfs_icache.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c index e1472004170e..5380a3f001e9 100644 --- a/fs/xfs/xfs_icache.c +++ b/fs/xfs/xfs_icache.c @@ -87,6 +87,7 @@ xfs_inode_alloc( /* VFS doesn't initialise i_mode or i_state! */ VFS_I(ip)->i_mode = 0; VFS_I(ip)->i_state = 0; + mapping_set_large_folios(VFS_I(ip)->i_mapping); XFS_STATS_INC(mp, vn_active); ASSERT(atomic_read(&ip->i_pincount) == 0); @@ -336,6 +337,7 @@ xfs_reinit_inode( inode->i_rdev = dev; inode->i_uid = uid; inode->i_gid = gid; + mapping_set_large_folios(inode->i_mapping); return error; }
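Taken together, the two hunks above are the entire opt-in: once every address_space operation on a mapping copes with folios larger than a single page, the filesystem tags the mapping and the page cache is then free to allocate multi-page folios for it. A minimal sketch of the same pattern for a hypothetical filesystem follows; the foofs_* names are invented for illustration, while mapping_set_large_folios() is the real interface used above and new_inode() is the stock VFS allocator:

#include <linux/fs.h>
#include <linux/pagemap.h>

static struct inode *foofs_alloc_inode(struct super_block *sb)
{
	struct inode *inode = new_inode(sb);

	if (!inode)
		return NULL;
	/*
	 * Only safe once every aop for this mapping (readpage,
	 * writepage, invalidatepage, ...) handles folios spanning
	 * multiple pages.
	 */
	mapping_set_large_folios(inode->i_mapping);
	return inode;
}

XFS sets the flag in both xfs_inode_alloc() and xfs_reinit_inode() because recycling a reclaimed inode reinitialises the VFS inode, which clears the mapping flags and would otherwise lose the opt-in.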