From patchwork Fri Aug 25 20:12:11 2023
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 13366286
From: "Matthew Wilcox (Oracle)"
To: Xiubo Li, Ilya Dryomov
Cc: "Matthew Wilcox (Oracle)", Jeff Layton, ceph-devel@vger.kernel.org,
    David Howells, linux-fsdevel@vger.kernel.org
Subject: [PATCH 01/15] ceph: Convert ceph_writepages_start() to use folios a little more
Date: Fri, 25 Aug 2023 21:12:11 +0100
Message-Id: <20230825201225.348148-2-willy@infradead.org>
In-Reply-To: <20230825201225.348148-1-willy@infradead.org>
References: <20230825201225.348148-1-willy@infradead.org>

After we iterate through the locked folios using filemap_get_folios_tag(),
we currently convert back to a page (and then in some circumstances back
to a folio again!). Just use a folio throughout and avoid various hidden
calls to compound_head(). Ceph still uses a page array to interact with
the OSD; that should be cleaned up in a subsequent patch.
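For reference, the tagged-folio writeback walk that this patch converts to
looks like the following in isolation. This is a minimal sketch, not ceph
code: the function name and the submission step are invented for
illustration, and error handling and snap-context logic are omitted.

#include <linux/pagemap.h>
#include <linux/pagevec.h>
#include <linux/sched.h>
#include <linux/writeback.h>

static void example_writeback_walk(struct address_space *mapping,
                                   struct writeback_control *wbc,
                                   pgoff_t index, pgoff_t end)
{
    struct folio_batch fbatch;
    unsigned i, nr;

    folio_batch_init(&fbatch);
    while ((nr = filemap_get_folios_tag(mapping, &index, end,
                                        PAGECACHE_TAG_DIRTY, &fbatch))) {
        for (i = 0; i < nr; i++) {
            /* Work on the folio directly; no page conversion. */
            struct folio *folio = fbatch.folios[i];

            if (!folio_trylock(folio))
                continue;
            if (!folio_test_dirty(folio) ||
                folio->mapping != mapping) {
                folio_unlock(folio);
                continue;
            }
            if (folio_clear_dirty_for_io(folio)) {
                /* ... submit folio_size(folio) bytes here ... */
            }
            folio_unlock(folio);
        }
        folio_batch_release(&fbatch);
        cond_resched();
    }
}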
Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/ceph/addr.c | 100 ++++++++++++++++++++++++-------------------------
 1 file changed, 49 insertions(+), 51 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index f4863078f7fe..9a0a79833eb0 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1018,7 +1018,7 @@ static int ceph_writepages_start(struct address_space *mapping,
     int num_ops = 0, op_idx;
     unsigned i, nr_folios, max_pages, locked_pages = 0;
     struct page **pages = NULL, **data_pages;
-    struct page *page;
+    struct folio *folio;
     pgoff_t strip_unit_end = 0;
     u64 offset = 0, len = 0;
     bool from_pool = false;
@@ -1032,22 +1032,22 @@ static int ceph_writepages_start(struct address_space *mapping,
         if (!nr_folios && !locked_pages)
             break;
         for (i = 0; i < nr_folios && locked_pages < max_pages; i++) {
-            page = &fbatch.folios[i]->page;
-            dout("? %p idx %lu\n", page, page->index);
+            folio = fbatch.folios[i];
+            dout("? %p idx %lu\n", folio, folio->index);
             if (locked_pages == 0)
-                lock_page(page);  /* first page */
-            else if (!trylock_page(page))
+                folio_lock(folio); /* first folio */
+            else if (!folio_trylock(folio))
                 break;

             /* only dirty pages, or our accounting breaks */
-            if (unlikely(!PageDirty(page)) ||
-                unlikely(page->mapping != mapping)) {
-                dout("!dirty or !mapping %p\n", page);
-                unlock_page(page);
+            if (unlikely(!folio_test_dirty(folio)) ||
+                unlikely(folio->mapping != mapping)) {
+                dout("!dirty or !mapping %p\n", folio);
+                folio_unlock(folio);
                 continue;
             }
             /* only if matching snap context */
-            pgsnapc = page_snap_context(page);
+            pgsnapc = folio->private;
             if (pgsnapc != snapc) {
                 dout("page snapc %p %lld != oldest %p %lld\n",
                      pgsnapc, pgsnapc->seq, snapc, snapc->seq);
@@ -1055,12 +1055,10 @@ static int ceph_writepages_start(struct address_space *mapping,
                     !ceph_wbc.head_snapc &&
                     wbc->sync_mode != WB_SYNC_NONE)
                     should_loop = true;
-                unlock_page(page);
+                folio_unlock(folio);
                 continue;
             }
-            if (page_offset(page) >= ceph_wbc.i_size) {
-                struct folio *folio = page_folio(page);
-
+            if (folio_pos(folio) >= ceph_wbc.i_size) {
                 dout("folio at %lu beyond eof %llu\n",
                      folio->index, ceph_wbc.i_size);
                 if ((ceph_wbc.size_stable ||
@@ -1071,31 +1069,32 @@ static int ceph_writepages_start(struct address_space *mapping,
                 folio_unlock(folio);
                 continue;
             }
-            if (strip_unit_end && (page->index > strip_unit_end)) {
-                dout("end of strip unit %p\n", page);
-                unlock_page(page);
+            if (strip_unit_end && (folio->index > strip_unit_end)) {
+                dout("end of strip unit %p\n", folio);
+                folio_unlock(folio);
                 break;
             }
-            if (PageWriteback(page) || PageFsCache(page)) {
+            if (folio_test_writeback(folio) ||
+                folio_test_fscache(folio)) {
                 if (wbc->sync_mode == WB_SYNC_NONE) {
-                    dout("%p under writeback\n", page);
-                    unlock_page(page);
+                    dout("%p under writeback\n", folio);
+                    folio_unlock(folio);
                     continue;
                 }
-                dout("waiting on writeback %p\n", page);
-                wait_on_page_writeback(page);
-                wait_on_page_fscache(page);
+                dout("waiting on writeback %p\n", folio);
+                folio_wait_writeback(folio);
+                folio_wait_fscache(folio);
             }

-            if (!clear_page_dirty_for_io(page)) {
-                dout("%p !clear_page_dirty_for_io\n", page);
-                unlock_page(page);
+            if (!folio_clear_dirty_for_io(folio)) {
+                dout("%p !folio_clear_dirty_for_io\n", folio);
+                folio_unlock(folio);
                 continue;
             }

             /*
              * We have something to write.  If this is
-             * the first locked page this time through,
+             * the first locked folio this time through,
              * calculate max possinle write size and
              * allocate a page array
              */
@@ -1105,7 +1104,7 @@ static int ceph_writepages_start(struct address_space *mapping,
                 u32 xlen;

                 /* prepare async write request */
-                offset = (u64)page_offset(page);
+                offset = folio_pos(folio);
                 ceph_calc_file_object_mapping(&ci->i_layout,
                                               offset, wsize,
                                               &objnum, &objoff,
@@ -1113,7 +1112,7 @@ static int ceph_writepages_start(struct address_space *mapping,
                 len = xlen;

                 num_ops = 1;
-                strip_unit_end = page->index +
+                strip_unit_end = folio->index +
                     ((len - 1) >> PAGE_SHIFT);

                 BUG_ON(pages);
@@ -1128,23 +1127,23 @@ static int ceph_writepages_start(struct address_space *mapping,
                 }

                 len = 0;
-            } else if (page->index !=
+            } else if (folio->index !=
                        (offset + len) >> PAGE_SHIFT) {
                 if (num_ops >= (from_pool ? CEPH_OSD_SLAB_OPS :
                                 CEPH_OSD_MAX_OPS)) {
-                    redirty_page_for_writepage(wbc, page);
-                    unlock_page(page);
+                    folio_redirty_for_writepage(wbc, folio);
+                    folio_unlock(folio);
                     break;
                 }

                 num_ops++;
-                offset = (u64)page_offset(page);
+                offset = (u64)folio_pos(folio);
                 len = 0;
             }

             /* note position of first page in fbatch */
-            dout("%p will write page %p idx %lu\n",
-                 inode, page, page->index);
+            dout("%p will write folio %p idx %lu\n",
+                 inode, folio, folio->index);

             if (atomic_long_inc_return(&fsc->writeback_count) >
                 CONGESTION_ON_THRESH(
@@ -1153,7 +1152,7 @@ static int ceph_writepages_start(struct address_space *mapping,

             if (IS_ENCRYPTED(inode)) {
                 pages[locked_pages] =
-                    fscrypt_encrypt_pagecache_blocks(page,
+                    fscrypt_encrypt_pagecache_blocks(&folio->page,
                         PAGE_SIZE, 0,
                         locked_pages ? GFP_NOWAIT : GFP_NOFS);
                 if (IS_ERR(pages[locked_pages])) {
@@ -1163,17 +1162,17 @@ static int ceph_writepages_start(struct address_space *mapping,
                     /* better not fail on first page! */
                     BUG_ON(locked_pages == 0);
                     pages[locked_pages] = NULL;
-                    redirty_page_for_writepage(wbc, page);
-                    unlock_page(page);
+                    folio_redirty_for_writepage(wbc, folio);
+                    folio_unlock(folio);
                     break;
                 }
                 ++locked_pages;
             } else {
-                pages[locked_pages++] = page;
+                pages[locked_pages++] = &folio->page;
             }

             fbatch.folios[i] = NULL;
-            len += thp_size(page);
+            len += folio_size(folio);
         }

         /* did we get anything? */
@@ -1222,7 +1221,7 @@ static int ceph_writepages_start(struct address_space *mapping,
                 BUG_ON(IS_ERR(req));
             }
             BUG_ON(len < ceph_fscrypt_page_offset(pages[locked_pages - 1]) +
-                         thp_size(pages[locked_pages - 1]) - offset);
+                         folio_size(folio) - offset);

             if (!ceph_inc_osd_stopping_blocker(fsc->mdsc)) {
                 rc = -EIO;
@@ -1236,9 +1235,9 @@ static int ceph_writepages_start(struct address_space *mapping,
         data_pages = pages;
         op_idx = 0;
         for (i = 0; i < locked_pages; i++) {
-            struct page *page = ceph_fscrypt_pagecache_page(pages[i]);
+            struct folio *folio = page_folio(ceph_fscrypt_pagecache_page(pages[i]));

-            u64 cur_offset = page_offset(page);
+            u64 cur_offset = folio_pos(folio);
             /*
              * Discontinuity in page range? Ceph can handle that by just passing
              * multiple extents in the write op.
@@ -1267,10 +1266,10 @@ static int ceph_writepages_start(struct address_space *mapping,
                 op_idx++;
             }

-            set_page_writeback(page);
+            folio_start_writeback(folio);
             if (caching)
-                ceph_set_page_fscache(page);
-            len += thp_size(page);
+                ceph_set_page_fscache(pages[i]);
+            len += folio_size(folio);
         }
         ceph_fscache_write_to_cache(inode, offset, len, caching);

@@ -1280,7 +1279,7 @@ static int ceph_writepages_start(struct address_space *mapping,
             /* writepages_finish() clears writeback pages
              * according to the data length, so make sure
              * data length covers all locked pages */
-            u64 min_len = len + 1 - thp_size(page);
+            u64 min_len = len + 1 - folio_size(folio);
             len = get_writepages_data_length(inode, pages[i - 1],
                                              offset);
             len = max(len, min_len);
@@ -1360,7 +1359,6 @@ static int ceph_writepages_start(struct address_space *mapping,
     if (wbc->sync_mode != WB_SYNC_NONE &&
         start_index == 0 && /* all dirty pages were checked */
         !ceph_wbc.head_snapc) {
-        struct page *page;
         unsigned i, nr;
         index = 0;
         while ((index <= end) &&
@@ -1369,10 +1367,10 @@ static int ceph_writepages_start(struct address_space *mapping,
                                     PAGECACHE_TAG_WRITEBACK,
                                     &fbatch))) {
             for (i = 0; i < nr; i++) {
-                page = &fbatch.folios[i]->page;
-                if (page_snap_context(page) != snapc)
+                struct folio *folio = fbatch.folios[i];
+                if (folio->private != snapc)
                     continue;
-                wait_on_page_writeback(page);
+                folio_wait_writeback(folio);
             }
             folio_batch_release(&fbatch);
             cond_resched();

From patchwork Fri Aug 25 20:12:12 2023
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 13366288
From: "Matthew Wilcox (Oracle)"
To: Xiubo Li, Ilya Dryomov
Cc: "Matthew Wilcox (Oracle)", Jeff Layton, ceph-devel@vger.kernel.org,
    David Howells, linux-fsdevel@vger.kernel.org
Subject: [PATCH 02/15] ceph: Convert ceph_page_mkwrite() to use a folio
Date: Fri, 25 Aug 2023 21:12:12 +0100
Message-Id: <20230825201225.348148-3-willy@infradead.org>
In-Reply-To: <20230825201225.348148-1-willy@infradead.org>
References: <20230825201225.348148-1-willy@infradead.org>

Operate on the entire folio instead of just the page. There was an
earlier effort to do this with thp_size(), but it had the exact type
confusion between head & tail pages that folios are designed to avoid.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/ceph/addr.c | 35 +++++++++++++++++------------------
 1 file changed, 17 insertions(+), 18 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 9a0a79833eb0..7c7dfcd63cd1 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1677,8 +1677,8 @@ static vm_fault_t ceph_page_mkwrite(struct vm_fault *vmf)
     struct ceph_inode_info *ci = ceph_inode(inode);
     struct ceph_file_info *fi = vma->vm_file->private_data;
     struct ceph_cap_flush *prealloc_cf;
-    struct page *page = vmf->page;
-    loff_t off = page_offset(page);
+    struct folio *folio = page_folio(vmf->page);
+    loff_t pos = folio_pos(folio);
     loff_t size = i_size_read(inode);
     size_t len;
     int want, got, err;
@@ -1695,50 +1695,49 @@ static vm_fault_t ceph_page_mkwrite(struct vm_fault *vmf)
     sb_start_pagefault(inode->i_sb);
     ceph_block_sigs(&oldset);

-    if (off + thp_size(page) <= size)
-        len = thp_size(page);
-    else
-        len = offset_in_thp(page, size);
+    len = folio_size(folio);
+    if (pos + folio_size(folio) > size)
+        len = size - pos;

     dout("page_mkwrite %p %llx.%llx %llu~%zd getting caps i_size %llu\n",
-         inode, ceph_vinop(inode), off, len, size);
+         inode, ceph_vinop(inode), pos, len, size);
     if (fi->fmode & CEPH_FILE_MODE_LAZY)
         want = CEPH_CAP_FILE_BUFFER | CEPH_CAP_FILE_LAZYIO;
     else
         want = CEPH_CAP_FILE_BUFFER;

     got = 0;
-    err = ceph_get_caps(vma->vm_file, CEPH_CAP_FILE_WR, want, off + len, &got);
+    err = ceph_get_caps(vma->vm_file, CEPH_CAP_FILE_WR, want, pos + len, &got);
     if (err < 0)
         goto out_free;

     dout("page_mkwrite %p %llu~%zd got cap refs on %s\n",
-         inode, off, len, ceph_cap_string(got));
+         inode, pos, len, ceph_cap_string(got));

-    /* Update time before taking page lock */
+    /* Update time before taking folio lock */
     file_update_time(vma->vm_file);
     inode_inc_iversion_raw(inode);

     do {
         struct ceph_snap_context *snapc;

-        lock_page(page);
+        folio_lock(folio);

-        if (page_mkwrite_check_truncate(page, inode) < 0) {
-            unlock_page(page);
+        if (folio_mkwrite_check_truncate(folio, inode) < 0) {
+            folio_unlock(folio);
             ret = VM_FAULT_NOPAGE;
             break;
         }

-        snapc = ceph_find_incompatible(page);
+        snapc = ceph_find_incompatible(&folio->page);
         if (!snapc) {
-            /* success.  we'll keep the page locked. */
-            set_page_dirty(page);
+            /* success.  we'll keep the folio locked. */
+            folio_mark_dirty(folio);
             ret = VM_FAULT_LOCKED;
             break;
         }
-        unlock_page(page);
+        folio_unlock(folio);

         if (IS_ERR(snapc)) {
             ret = VM_FAULT_SIGBUS;
@@ -1762,7 +1761,7 @@ static vm_fault_t ceph_page_mkwrite(struct vm_fault *vmf)
     }

     dout("page_mkwrite %p %llu~%zd dropping cap refs on %s ret %x\n",
-         inode, off, len, ceph_cap_string(got), ret);
+         inode, pos, len, ceph_cap_string(got), ret);
     ceph_put_cap_refs_async(ci, got);
 out_free:
     ceph_restore_sigs(&oldset);

From patchwork Fri Aug 25 20:12:13 2023
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 13366289
From: "Matthew Wilcox (Oracle)"
To: Xiubo Li, Ilya Dryomov
Cc: "Matthew Wilcox (Oracle)", Jeff Layton, ceph-devel@vger.kernel.org,
    David Howells, linux-fsdevel@vger.kernel.org
Subject: [PATCH 03/15] mm: Delete page_mkwrite_check_truncate()
Date: Fri, 25 Aug 2023 21:12:13 +0100
Message-Id: <20230825201225.348148-4-willy@infradead.org>
In-Reply-To: <20230825201225.348148-1-willy@infradead.org>
References: <20230825201225.348148-1-willy@infradead.org>

All users of this function have been converted to
folio_mkwrite_check_truncate(). Remove it.
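As a usage sketch of the surviving folio variant: a generic ->page_mkwrite
handler checks for a racing truncation roughly as below. This is
illustrative only and not taken from any particular filesystem; the
function name is invented.

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

static vm_fault_t example_page_mkwrite(struct vm_fault *vmf)
{
    struct folio *folio = page_folio(vmf->page);
    struct inode *inode = file_inode(vmf->vma->vm_file);
    ssize_t len;

    folio_lock(folio);
    len = folio_mkwrite_check_truncate(folio, inode);
    if (len < 0) {
        /* The folio was truncated away while we waited for the lock. */
        folio_unlock(folio);
        return VM_FAULT_NOPAGE;
    }

    /* ... make the first 'len' bytes of the folio writable ... */
    folio_mark_dirty(folio);
    return VM_FAULT_LOCKED;
}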
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 28 ----------------------------
 1 file changed, 28 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 351c3b7f93a1..f43a0e05b092 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -1491,34 +1491,6 @@ static inline ssize_t folio_mkwrite_check_truncate(struct folio *folio,
     return offset;
 }

-/**
- * page_mkwrite_check_truncate - check if page was truncated
- * @page: the page to check
- * @inode: the inode to check the page against
- *
- * Returns the number of bytes in the page up to EOF,
- * or -EFAULT if the page was truncated.
- */
-static inline int page_mkwrite_check_truncate(struct page *page,
-                                              struct inode *inode)
-{
-    loff_t size = i_size_read(inode);
-    pgoff_t index = size >> PAGE_SHIFT;
-    int offset = offset_in_page(size);
-
-    if (page->mapping != inode->i_mapping)
-        return -EFAULT;
-
-    /* page is wholly inside EOF */
-    if (page->index < index)
-        return PAGE_SIZE;
-    /* page is wholly past EOF */
-    if (page->index > index || !offset)
-        return -EFAULT;
-    /* page is partially inside EOF */
-    return offset;
-}
-
 /**
  * i_blocks_per_folio - How many blocks fit in this folio.
  * @inode: The inode which contains the blocks.

From patchwork Fri Aug 25 20:12:14 2023
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 13366290
From: "Matthew Wilcox (Oracle)"
To: Xiubo Li, Ilya Dryomov
Cc: "Matthew Wilcox (Oracle)", Jeff Layton, ceph-devel@vger.kernel.org,
    David Howells, linux-fsdevel@vger.kernel.org
Subject: [PATCH 04/15] ceph: Add a migrate_folio method
Date: Fri, 25 Aug 2023 21:12:14 +0100
Message-Id: <20230825201225.348148-5-willy@infradead.org>
In-Reply-To: <20230825201225.348148-1-willy@infradead.org>
References: <20230825201225.348148-1-willy@infradead.org>

The ceph_snap_context is independent of the address of the data, so we
can implement folio migration by just removing the ceph_snap_context
from the existing folio and attaching it to the new one, which is
exactly what filemap_migrate_folio() does.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/ceph/addr.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 7c7dfcd63cd1..a0a1fac1a0db 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1563,6 +1563,7 @@ const struct address_space_operations ceph_aops = {
     .invalidate_folio = ceph_invalidate_folio,
     .release_folio = ceph_release_folio,
     .direct_IO = noop_direct_IO,
+    .migrate_folio = filemap_migrate_folio,
 };

 static void ceph_block_sigs(sigset_t *oldset)
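For context on why this one-liner is enough: filemap_migrate_folio() does
roughly the following. This is a paraphrased sketch of the mm/migrate.c
helper from memory, not a verbatim copy; consult the tree for the
authoritative version.

#include <linux/migrate.h>
#include <linux/pagemap.h>

int filemap_migrate_folio_sketch(struct address_space *mapping,
                                 struct folio *dst, struct folio *src,
                                 enum migrate_mode mode)
{
    int ret;

    /* Move the page cache entry over to the destination folio. */
    ret = folio_migrate_mapping(mapping, dst, src, 0);
    if (ret != MIGRATEPAGE_SUCCESS)
        return ret;

    /* Transfer the private data -- for ceph, the ceph_snap_context. */
    if (folio_get_private(src))
        folio_attach_private(dst, folio_detach_private(src));

    /* Copy contents and relevant flags to the new folio. */
    folio_migrate_copy(dst, src);
    return MIGRATEPAGE_SUCCESS;
}

Because the snap context does not care which physical folio it hangs off,
no ceph-specific migration hook is needed.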

From patchwork Fri Aug 25 20:12:15 2023
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 13366284
From: "Matthew Wilcox (Oracle)"
To: Xiubo Li, Ilya Dryomov
Cc: "Matthew Wilcox (Oracle)", Jeff Layton, ceph-devel@vger.kernel.org,
    David Howells, linux-fsdevel@vger.kernel.org
Subject: [PATCH 05/15] ceph: Remove ceph_writepage()
Date: Fri, 25 Aug 2023 21:12:15 +0100
Message-Id: <20230825201225.348148-6-willy@infradead.org>
In-Reply-To: <20230825201225.348148-1-willy@infradead.org>
References: <20230825201225.348148-1-willy@infradead.org>

Now that we have a migrate_folio method, there is no need for a
writepage method. All writeback will go through the writepages method
instead, which is more efficient.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/ceph/addr.c | 25 -------------------------
 1 file changed, 25 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index a0a1fac1a0db..785f2983ac0e 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -795,30 +795,6 @@ static int writepage_nounlock(struct page *page, struct writeback_control *wbc)
     return err;
 }

-static int ceph_writepage(struct page *page, struct writeback_control *wbc)
-{
-    int err;
-    struct inode *inode = page->mapping->host;
-    BUG_ON(!inode);
-    ihold(inode);
-
-    if (wbc->sync_mode == WB_SYNC_NONE &&
-        ceph_inode_to_client(inode)->write_congested)
-        return AOP_WRITEPAGE_ACTIVATE;
-
-    wait_on_page_fscache(page);
-
-    err = writepage_nounlock(page, wbc);
-    if (err == -ERESTARTSYS) {
-        /* direct memory reclaimer was killed by SIGKILL. return 0
-         * to prevent caller from setting mapping/page error */
-        err = 0;
-    }
-    unlock_page(page);
-    iput(inode);
-    return err;
-}
-
 /*
  * async writeback completion handler.
  *
@@ -1555,7 +1531,6 @@ static int ceph_write_end(struct file *file, struct address_space *mapping,
 const struct address_space_operations ceph_aops = {
     .read_folio = netfs_read_folio,
     .readahead = netfs_readahead,
-    .writepage = ceph_writepage,
     .writepages = ceph_writepages_start,
     .write_begin = ceph_write_begin,
     .write_end = ceph_write_end,

From patchwork Fri Aug 25 20:12:16 2023
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 13366283
From: "Matthew Wilcox (Oracle)"
To: Xiubo Li, Ilya Dryomov
Cc: "Matthew Wilcox (Oracle)", Jeff Layton, ceph-devel@vger.kernel.org,
    David Howells, linux-fsdevel@vger.kernel.org
Subject: [PATCH 06/15] ceph: Convert ceph_find_incompatible() to take a folio
Date: Fri, 25 Aug 2023 21:12:16 +0100
Message-Id: <20230825201225.348148-7-willy@infradead.org>
In-Reply-To: <20230825201225.348148-1-willy@infradead.org>
References: <20230825201225.348148-1-willy@infradead.org>

Both callers already have a folio, so pass it in. Use folio->index to
identify the folio in debug output rather than the folio pointer.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/ceph/addr.c | 35 ++++++++++++++++++-----------------
 1 file changed, 18 insertions(+), 17 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 785f2983ac0e..0027906a9257 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1385,23 +1385,22 @@ static int context_is_writeable_or_written(struct inode *inode,

 /**
  * ceph_find_incompatible - find an incompatible context and return it
- * @page: page being dirtied
+ * @folio: folio being dirtied
  *
- * We are only allowed to write into/dirty a page if the page is
+ * We are only allowed to write into/dirty a folio if the folio is
  * clean, or already dirty within the same snap context. Returns a
  * conflicting context if there is one, NULL if there isn't, or a
  * negative error code on other errors.
  *
- * Must be called with page lock held.
+ * Must be called with folio lock held.
  */
-static struct ceph_snap_context *
-ceph_find_incompatible(struct page *page)
+static struct ceph_snap_context *ceph_find_incompatible(struct folio *folio)
 {
-    struct inode *inode = page->mapping->host;
+    struct inode *inode = folio->mapping->host;
     struct ceph_inode_info *ci = ceph_inode(inode);

     if (ceph_inode_is_shutdown(inode)) {
-        dout(" page %p %llx:%llx is shutdown\n", page,
+        dout(" folio %ld %llx:%llx is shutdown\n", folio->index,
              ceph_vinop(inode));
         return ERR_PTR(-ESTALE);
     }
@@ -1409,29 +1408,31 @@ ceph_find_incompatible(struct page *page)
     for (;;) {
         struct ceph_snap_context *snapc, *oldest;

-        wait_on_page_writeback(page);
+        folio_wait_writeback(folio);

-        snapc = page_snap_context(page);
+        snapc = folio->private;
         if (!snapc || snapc == ci->i_head_snapc)
             break;

         /*
-         * this page is already dirty in another (older) snap
+         * this folio is already dirty in another (older) snap
          * context!  is it writeable now?
          */
         oldest = get_oldest_context(inode, NULL, NULL);
         if (snapc->seq > oldest->seq) {
             /* not writeable -- return it for the caller to deal with */
             ceph_put_snap_context(oldest);
-            dout(" page %p snapc %p not current or oldest\n", page, snapc);
+            dout(" folio %ld snapc %p not current or oldest\n",
+                 folio->index, snapc);
             return ceph_get_snap_context(snapc);
         }
         ceph_put_snap_context(oldest);

-        /* yay, writeable, do it now (without dropping page lock) */
-        dout(" page %p snapc %p not current, but oldest\n", page, snapc);
-        if (clear_page_dirty_for_io(page)) {
-            int r = writepage_nounlock(page, NULL);
+        /* yay, writeable, do it now (without dropping folio lock) */
+        dout(" folio %ld snapc %p not current, but oldest\n",
+             folio->index, snapc);
+        if (folio_clear_dirty_for_io(folio)) {
+            int r = writepage_nounlock(&folio->page, NULL);
             if (r < 0)
                 return ERR_PTR(r);
         }
@@ -1446,7 +1447,7 @@ static int ceph_netfs_check_write_begin(struct file *file, loff_t pos, unsigned
     struct ceph_inode_info *ci = ceph_inode(inode);
     struct ceph_snap_context *snapc;

-    snapc = ceph_find_incompatible(folio_page(*foliop, 0));
+    snapc = ceph_find_incompatible(*foliop);
     if (snapc) {
         int r;

@@ -1705,7 +1706,7 @@ static vm_fault_t ceph_page_mkwrite(struct vm_fault *vmf)
             break;
         }

-        snapc = ceph_find_incompatible(&folio->page);
+        snapc = ceph_find_incompatible(folio);
         if (!snapc) {
             /* success.  we'll keep the folio locked. */
             folio_mark_dirty(folio);

From patchwork Fri Aug 25 20:12:17 2023
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 13366279
From: "Matthew Wilcox (Oracle)"
To: Xiubo Li, Ilya Dryomov
Cc: "Matthew Wilcox (Oracle)", Jeff Layton, ceph-devel@vger.kernel.org,
    David Howells, linux-fsdevel@vger.kernel.org
Subject: [PATCH 07/15] ceph: Convert writepage_nounlock() to take a folio
Date: Fri, 25 Aug 2023 21:12:17 +0100
Message-Id: <20230825201225.348148-8-willy@infradead.org>
In-Reply-To: <20230825201225.348148-1-willy@infradead.org>
References: <20230825201225.348148-1-willy@infradead.org>

Remove the use of a lot of old APIs and use folio->index to identify
folios in debug output.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/ceph/addr.c | 66 +++++++++++++++++++++++++-------------------------
 1 file changed, 33 insertions(+), 33 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 0027906a9257..02caf10d43ed 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -648,52 +648,52 @@ static u64 get_writepages_data_length(struct inode *inode,
 }

 /*
- * Write a single page, but leave the page locked.
+ * Write a single folio, but leave the folio locked.
  *
  * If we get a write error, mark the mapping for error, but still adjust the
- * dirty page accounting (i.e., page is no longer dirty).
+ * dirty page accounting (i.e., folio is no longer dirty).
  */
-static int writepage_nounlock(struct page *page, struct writeback_control *wbc)
+static int writepage_nounlock(struct folio *folio, struct writeback_control *wbc)
 {
-    struct folio *folio = page_folio(page);
-    struct inode *inode = page->mapping->host;
+    struct inode *inode = folio->mapping->host;
     struct ceph_inode_info *ci = ceph_inode(inode);
     struct ceph_fs_client *fsc = ceph_inode_to_client(inode);
     struct ceph_snap_context *snapc, *oldest;
-    loff_t page_off = page_offset(page);
+    loff_t page_off = folio_pos(folio);
     int err;
-    loff_t len = thp_size(page);
+    loff_t len = folio_size(folio);
     loff_t wlen;
     struct ceph_writeback_ctl ceph_wbc;
     struct ceph_osd_client *osdc = &fsc->client->osdc;
     struct ceph_osd_request *req;
     bool caching = ceph_is_cache_enabled(inode);
+    struct page *page = &folio->page;
     struct page *bounce_page = NULL;

-    dout("writepage %p idx %lu\n", page, page->index);
+    dout("writepage %lu\n", folio->index);
     if (ceph_inode_is_shutdown(inode))
         return -EIO;

     /* verify this is a writeable snap context */
-    snapc = page_snap_context(page);
+    snapc = folio->private;
     if (!snapc) {
-        dout("writepage %p page %p not dirty?\n", inode, page);
+        dout("writepage %p folio %lu not dirty?\n", inode, folio->index);
         return 0;
     }
     oldest = get_oldest_context(inode, &ceph_wbc, snapc);
     if (snapc->seq > oldest->seq) {
-        dout("writepage %p page %p snapc %p not writeable - noop\n",
-             inode, page, snapc);
+        dout("writepage %p folio %lu snapc %p not writeable - noop\n",
+             inode, folio->index, snapc);
         /* we should only noop if called by kswapd */
         WARN_ON(!(current->flags & PF_MEMALLOC));
         ceph_put_snap_context(oldest);
-        redirty_page_for_writepage(wbc, page);
+        folio_redirty_for_writepage(wbc, folio);
         return 0;
     }
     ceph_put_snap_context(oldest);

-    /* is this a partial page at end of file? */
+    /* is this a partial folio at end of file? */
     if (page_off >= ceph_wbc.i_size) {
         dout("folio at %lu beyond eof %llu\n",
              folio->index, ceph_wbc.i_size);
@@ -705,8 +705,8 @@ static int writepage_nounlock(struct page *page, struct writeback_control *wbc)
         len = ceph_wbc.i_size - page_off;

     wlen = IS_ENCRYPTED(inode) ?
            round_up(len, CEPH_FSCRYPT_BLOCK_SIZE) : len;
-    dout("writepage %p page %p index %lu on %llu~%llu snapc %p seq %lld\n",
-         inode, page, page->index, page_off, wlen, snapc, snapc->seq);
+    dout("writepage %p folio %lu on %llu~%llu snapc %p seq %lld\n",
+         inode, folio->index, page_off, wlen, snapc, snapc->seq);

     if (atomic_long_inc_return(&fsc->writeback_count) >
         CONGESTION_ON_THRESH(fsc->mount_options->congestion_kb))
@@ -718,32 +718,32 @@ static int writepage_nounlock(struct page *page, struct writeback_control *wbc)
             ceph_wbc.truncate_seq, ceph_wbc.truncate_size,
             true);
     if (IS_ERR(req)) {
-        redirty_page_for_writepage(wbc, page);
+        folio_redirty_for_writepage(wbc, folio);
         return PTR_ERR(req);
     }

     if (wlen < len)
         len = wlen;

-    set_page_writeback(page);
+    folio_start_writeback(folio);
     if (caching)
-        ceph_set_page_fscache(page);
+        ceph_set_page_fscache(&folio->page);
     ceph_fscache_write_to_cache(inode, page_off, len, caching);

     if (IS_ENCRYPTED(inode)) {
-        bounce_page = fscrypt_encrypt_pagecache_blocks(page,
+        bounce_page = fscrypt_encrypt_pagecache_blocks(&folio->page,
                                 CEPH_FSCRYPT_BLOCK_SIZE, 0,
                                 GFP_NOFS);
         if (IS_ERR(bounce_page)) {
-            redirty_page_for_writepage(wbc, page);
-            end_page_writeback(page);
+            folio_redirty_for_writepage(wbc, folio);
+            folio_end_writeback(folio);
             ceph_osdc_put_request(req);
             return PTR_ERR(bounce_page);
         }
     }

     /* it may be a short write due to an object boundary */
-    WARN_ON_ONCE(len > thp_size(page));
+    WARN_ON_ONCE(len > folio_size(folio));
     osd_req_op_extent_osd_data_pages(req, 0,
             bounce_page ? &bounce_page : &page, wlen, 0,
             false, false);
@@ -767,26 +767,26 @@ static int writepage_nounlock(struct page *page, struct writeback_control *wbc)
         wbc = &tmp_wbc;
         if (err == -ERESTARTSYS) {
             /* killed by SIGKILL */
-            dout("writepage interrupted page %p\n", page);
-            redirty_page_for_writepage(wbc, page);
-            end_page_writeback(page);
+            dout("writepage interrupted folio %lu\n", folio->index);
+            folio_redirty_for_writepage(wbc, folio);
+            folio_end_writeback(folio);
             return err;
         }
         if (err == -EBLOCKLISTED)
             fsc->blocklisted = true;
-        dout("writepage setting page/mapping error %d %p\n",
-             err, page);
+        dout("writepage setting folio/mapping error %d %lu\n",
+             err, folio->index);
         mapping_set_error(&inode->i_data, err);
         wbc->pages_skipped++;
     } else {
-        dout("writepage cleaned page %p\n", page);
+        dout("writepage cleaned folio %lu\n", folio->index);
         err = 0;  /* vfs expects us to return 0 */
     }
-    oldest = detach_page_private(page);
+    oldest = folio_detach_private(folio);
     WARN_ON_ONCE(oldest != snapc);
-    end_page_writeback(page);
+    folio_end_writeback(folio);
     ceph_put_wrbuffer_cap_refs(ci, 1, snapc);
-    ceph_put_snap_context(snapc);  /* page's reference */
+    ceph_put_snap_context(snapc);  /* folio's reference */

     if (atomic_long_dec_return(&fsc->writeback_count) <
         CONGESTION_OFF_THRESH(fsc->mount_options->congestion_kb))
@@ -1432,7 +1432,7 @@ static struct ceph_snap_context *ceph_find_incompatible(struct folio *folio)
         dout(" folio %ld snapc %p not current, but oldest\n",
              folio->index, snapc);
         if (folio_clear_dirty_for_io(folio)) {
-            int r = writepage_nounlock(&folio->page, NULL);
+            int r = writepage_nounlock(folio, NULL);
             if (r < 0)
                 return ERR_PTR(r);
         }
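As a quick reference for the "lot of old APIs" this patch replaces, here is
a compact sketch of the main page-to-folio substitutions, each of which
avoids a hidden compound_head() lookup. This is an illustrative helper, not
ceph code; example_submit_write() is a hypothetical stub standing in for
the real I/O submission.

#include <linux/pagemap.h>
#include <linux/writeback.h>

static int example_submit_write(loff_t pos, size_t len); /* hypothetical */

static void example_folio_writeback(struct folio *folio,
                                    struct writeback_control *wbc)
{
    loff_t pos = folio_pos(folio);   /* was page_offset(page) */
    size_t len = folio_size(folio);  /* was thp_size(page)    */

    if (!folio_clear_dirty_for_io(folio))  /* was clear_page_dirty_for_io() */
        return;                            /* already written back          */

    folio_start_writeback(folio);          /* was set_page_writeback()      */
    if (example_submit_write(pos, len) < 0)
        folio_redirty_for_writepage(wbc, folio); /* was redirty_page_for_writepage() */
    folio_end_writeback(folio);            /* was end_page_writeback()      */
}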

From patchwork Fri Aug 25 20:12:18 2023
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 13366287
From: "Matthew Wilcox (Oracle)"
To: Xiubo Li, Ilya Dryomov
Cc: "Matthew Wilcox (Oracle)", Jeff Layton, ceph-devel@vger.kernel.org,
    David Howells, linux-fsdevel@vger.kernel.org
Subject: [PATCH 08/15] ceph: Convert writepages_finish() to use a folio
Date: Fri, 25 Aug 2023 21:12:18 +0100
Message-Id: <20230825201225.348148-9-willy@infradead.org>
In-Reply-To: <20230825201225.348148-1-willy@infradead.org>
References: <20230825201225.348148-1-willy@infradead.org>

Remove several implicit calls to compound_head(). Remove the
BUG_ON(!page) as it won't help debug any crashes (the WARN_ON will
crash in a distinctive way).
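The per-folio completion work this patch converts follows a common pattern;
a minimal sketch of it, with ceph's snap-context bookkeeping reduced to a
comment (illustrative only, not the full writepages_finish()):

#include <linux/pagemap.h>

static void example_complete_one(struct folio *folio)
{
    void *priv;

    /* folio->private held a snap-context reference while dirty. */
    priv = folio_detach_private(folio);  /* was detach_page_private() */
    /* ... drop the reference that 'priv' represents ... */
    (void)priv;

    folio_end_writeback(folio);          /* was end_page_writeback() */
    folio_unlock(folio);                 /* was unlock_page()        */
}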
Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/ceph/addr.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 02caf10d43ed..765b37db2729 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -806,7 +806,6 @@ static void writepages_finish(struct ceph_osd_request *req)
     struct inode *inode = req->r_inode;
     struct ceph_inode_info *ci = ceph_inode(inode);
     struct ceph_osd_data *osd_data;
-    struct page *page;
     int num_pages, total_pages = 0;
     int i, j;
     int rc = req->r_result;
@@ -850,29 +849,28 @@ static void writepages_finish(struct ceph_osd_request *req)
               (u64)osd_data->length);
         total_pages += num_pages;
         for (j = 0; j < num_pages; j++) {
-            page = osd_data->pages[j];
-            if (fscrypt_is_bounce_page(page)) {
-                page = fscrypt_pagecache_page(page);
+            struct folio *folio = page_folio(osd_data->pages[j]);
+            if (fscrypt_is_bounce_folio(folio)) {
+                folio = fscrypt_pagecache_folio(folio);
                 fscrypt_free_bounce_page(osd_data->pages[j]);
-                osd_data->pages[j] = page;
+                osd_data->pages[j] = &folio->page;
             }
-            BUG_ON(!page);
-            WARN_ON(!PageUptodate(page));
+            WARN_ON(!folio_test_uptodate(folio));

             if (atomic_long_dec_return(&fsc->writeback_count) <
                  CONGESTION_OFF_THRESH(
                     fsc->mount_options->congestion_kb))
                 fsc->write_congested = false;

-            ceph_put_snap_context(detach_page_private(page));
-            end_page_writeback(page);
-            dout("unlocking %p\n", page);
+            ceph_put_snap_context(folio_detach_private(folio));
+            folio_end_writeback(folio);
+            dout("unlocking %lu\n", folio->index);

             if (remove_page)
                 generic_error_remove_page(inode->i_mapping,
-                              page);
+                              &folio->page);

-            unlock_page(page);
+            folio_unlock(folio);
         }
         dout("writepages_finish %p wrote %llu bytes cleaned %d pages\n",
              inode, osd_data->length, rc >= 0 ? num_pages : 0);

From patchwork Fri Aug 25 20:12:19 2023
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 13366293
From: "Matthew Wilcox (Oracle)"
To: Xiubo Li, Ilya Dryomov
Cc: "Matthew Wilcox (Oracle)", Jeff Layton, ceph-devel@vger.kernel.org,
    David Howells, linux-fsdevel@vger.kernel.org
Subject: [PATCH 09/15] ceph: Use a folio in ceph_filemap_fault()
Date: Fri, 25 Aug 2023 21:12:19 +0100
Message-Id: <20230825201225.348148-10-willy@infradead.org>
In-Reply-To: <20230825201225.348148-1-willy@infradead.org>
References: <20230825201225.348148-1-willy@infradead.org>

This leg of the function is concerned with inline data, so we know it's
at index 0 and contains only a single page.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/ceph/addr.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 765b37db2729..1812c3e6e64f 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1608,29 +1608,30 @@ static vm_fault_t ceph_filemap_fault(struct vm_fault *vmf)
         ret = VM_FAULT_SIGBUS;
     } else {
         struct address_space *mapping = inode->i_mapping;
-        struct page *page;
+        struct folio *folio;

         filemap_invalidate_lock_shared(mapping);
-        page = find_or_create_page(mapping, 0,
+        folio = __filemap_get_folio(mapping, 0,
+                FGP_LOCK|FGP_ACCESSED|FGP_CREAT,
                 mapping_gfp_constraint(mapping, ~__GFP_FS));
-        if (!page) {
+        if (!folio) {
             ret = VM_FAULT_OOM;
             goto out_inline;
         }
-        err = __ceph_do_getattr(inode, page,
+        err = __ceph_do_getattr(inode, &folio->page,
                     CEPH_STAT_CAP_INLINE_DATA, true);
         if (err < 0 || off >= i_size_read(inode)) {
-            unlock_page(page);
-            put_page(page);
+            folio_unlock(folio);
+            folio_put(folio);
             ret = vmf_error(err);
             goto out_inline;
         }
         if (err < PAGE_SIZE)
-            zero_user_segment(page, err, PAGE_SIZE);
+            folio_zero_segment(folio, err, folio_size(folio));
         else
-            flush_dcache_page(page);
-        SetPageUptodate(page);
-        vmf->page = page;
+            flush_dcache_folio(folio);
+        folio_mark_uptodate(folio);
+        vmf->page = folio_page(folio, 0);
         ret = VM_FAULT_MAJOR | VM_FAULT_LOCKED;
 out_inline:
         filemap_invalidate_unlock_shared(mapping);

From patchwork Fri Aug 25 20:12:20 2023
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 13366280
From: "Matthew Wilcox (Oracle)"
To: Xiubo Li, Ilya Dryomov
Cc: "Matthew Wilcox (Oracle)", Jeff Layton, ceph-devel@vger.kernel.org,
    David Howells, linux-fsdevel@vger.kernel.org
Subject: [PATCH 10/15] ceph: Convert ceph_read_iter() to use a folio to read inline data
Date: Fri, 25 Aug 2023 21:12:20 +0100
Message-Id: <20230825201225.348148-11-willy@infradead.org>
In-Reply-To: <20230825201225.348148-1-willy@infradead.org>
References: <20230825201225.348148-1-willy@infradead.org>

Use the folio APIs instead of the page APIs.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/ceph/file.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index b1da02f5dbe3..5c4f763b1304 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -2083,19 +2083,19 @@ static ssize_t ceph_read_iter(struct kiocb *iocb, struct iov_iter *to)
     if (retry_op > HAVE_RETRIED && ret >= 0) {
         int statret;
-        struct page *page = NULL;
+        struct folio *folio = NULL;
         loff_t i_size;

         if (retry_op == READ_INLINE) {
-            page = __page_cache_alloc(GFP_KERNEL);
-            if (!page)
+            folio = filemap_alloc_folio(GFP_KERNEL, 0);
+            if (!folio)
                 return -ENOMEM;
         }

-        statret = __ceph_do_getattr(inode, page,
-                        CEPH_STAT_CAP_INLINE_DATA, !!page);
+        statret = __ceph_do_getattr(inode, &folio->page,
+                        CEPH_STAT_CAP_INLINE_DATA, !!folio);
         if (statret < 0) {
-            if (page)
-                __free_page(page);
+            if (folio)
+                folio_put(folio);
             if (statret == -ENODATA) {
                 BUG_ON(retry_op != READ_INLINE);
                 goto again;
@@ -2112,8 +2112,8 @@ static ssize_t ceph_read_iter(struct kiocb *iocb, struct iov_iter *to)
                      iocb->ki_pos + len);
             end = min_t(loff_t, end, PAGE_SIZE);
             if (statret < end)
-                zero_user_segment(page, statret, end);
-            ret = copy_page_to_iter(page,
+                folio_zero_segment(folio, statret, end);
+            ret = copy_folio_to_iter(folio,
                 iocb->ki_pos & ~PAGE_MASK,
                 end - iocb->ki_pos, to);
             iocb->ki_pos += ret;
@@ -2126,7 +2126,7 @@ static ssize_t ceph_read_iter(struct kiocb *iocb, struct iov_iter *to)
             iocb->ki_pos += ret;
             read += ret;
         }
-        __free_pages(page, 0);
+        folio_put(folio);
         return read;
     }
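A minimal sketch of the copy step this patch switches to -- copying from a
folio into a user iov_iter with copy_folio_to_iter(). The function name is
invented and bounds checking is left to the caller; illustrative only.

#include <linux/pagemap.h>
#include <linux/uio.h>

static ssize_t example_copy_inline(struct folio *folio, loff_t pos,
                                   size_t len, struct iov_iter *to)
{
    /* Arguments: offset within the folio, byte count, destination iter. */
    return copy_folio_to_iter(folio, offset_in_folio(folio, pos), len, to);
}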

From patchwork Fri Aug 25 20:12:21 2023
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 13366282
From: "Matthew Wilcox (Oracle)"
To: Xiubo Li, Ilya Dryomov
Cc: "Matthew Wilcox (Oracle)", Jeff Layton, ceph-devel@vger.kernel.org,
    David Howells, linux-fsdevel@vger.kernel.org
Subject: [PATCH 11/15] ceph: Convert __ceph_do_getattr() to take a folio
Date: Fri, 25 Aug 2023 21:12:21 +0100
Message-Id: <20230825201225.348148-12-willy@infradead.org>
In-Reply-To: <20230825201225.348148-1-willy@infradead.org>
References: <20230825201225.348148-1-willy@infradead.org>

Both callers now have a folio, so pass it in.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/ceph/addr.c  | 2 +-
 fs/ceph/file.c  | 2 +-
 fs/ceph/inode.c | 6 +++---
 fs/ceph/super.h | 4 ++--
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 1812c3e6e64f..09178a8ebbde 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1618,7 +1618,7 @@ static vm_fault_t ceph_filemap_fault(struct vm_fault *vmf)
             ret = VM_FAULT_OOM;
             goto out_inline;
         }
-        err = __ceph_do_getattr(inode, &folio->page,
+        err = __ceph_do_getattr(inode, folio,
                     CEPH_STAT_CAP_INLINE_DATA, true);
         if (err < 0 || off >= i_size_read(inode)) {
             folio_unlock(folio);
diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index 5c4f763b1304..f4c3cb05b6f1 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -2091,7 +2091,7 @@ static ssize_t ceph_read_iter(struct kiocb *iocb, struct iov_iter *to)
                 return -ENOMEM;
         }

-        statret = __ceph_do_getattr(inode, &folio->page,
+        statret = __ceph_do_getattr(inode, folio,
                         CEPH_STAT_CAP_INLINE_DATA, !!folio);
         if (statret < 0) {
             if (folio)
diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
index 800ab7920513..ced036d47b3b 100644
--- a/fs/ceph/inode.c
+++ b/fs/ceph/inode.c
@@ -2809,7 +2809,7 @@ int ceph_try_to_choose_auth_mds(struct inode *inode, int mask)
  * Verify that we have a lease on the given mask.  If not,
  * do a getattr against an mds.
  */
-int __ceph_do_getattr(struct inode *inode, struct page *locked_page,
+int __ceph_do_getattr(struct inode *inode, struct folio *locked_folio,
 		      int mask, bool force)
 {
 	struct ceph_fs_client *fsc = ceph_sb_to_client(inode->i_sb);
@@ -2836,9 +2836,9 @@ int __ceph_do_getattr(struct inode *inode, struct page *locked_page,
 	ihold(inode);
 	req->r_num_caps = 1;
 	req->r_args.getattr.mask = cpu_to_le32(mask);
-	req->r_locked_page = locked_page;
+	req->r_locked_page = &locked_folio->page;
 	err = ceph_mdsc_do_request(mdsc, NULL, req);
-	if (locked_page && err == 0) {
+	if (locked_folio && err == 0) {
 		u64 inline_version = req->r_reply_info.targeti.inline_version;
 		if (inline_version == 0) {
 			/* the reply is supposed to contain inline data */
diff --git a/fs/ceph/super.h b/fs/ceph/super.h
index 51c7f2b14f6f..3649ac41a626 100644
--- a/fs/ceph/super.h
+++ b/fs/ceph/super.h
@@ -1081,8 +1081,8 @@ static inline void ceph_queue_flush_snaps(struct inode *inode)
 }

 extern int ceph_try_to_choose_auth_mds(struct inode *inode, int mask);
-extern int __ceph_do_getattr(struct inode *inode, struct page *locked_page,
-			     int mask, bool force);
+int __ceph_do_getattr(struct inode *inode, struct folio *locked_folio,
+		      int mask, bool force);
 static inline int ceph_do_getattr(struct inode *inode, int mask, bool force)
 {
 	return __ceph_do_getattr(inode, NULL, mask, force);
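
One detail worth noting in this step: the MDS request still stores a
struct page *, so the function bridges with &locked_folio->page. That
compiles to a cast rather than a dereference because struct page is the
first member of struct folio, but evaluating it on a NULL folio (the
ceph_do_getattr() path) is formally undefined. A guarded helper, shown
here as a hypothetical sketch rather than anything in the patch, is the
cautious spelling:

	/* Hypothetical helper; the patch open-codes the assignment. */
	static struct page *folio_head_page_or_null(struct folio *folio)
	{
		return folio ? &folio->page : NULL;
	}
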
From patchwork Fri Aug 25 20:12:22 2023
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 13366292
From: "Matthew Wilcox (Oracle)"
To: Xiubo Li , Ilya Dryomov
Cc: "Matthew Wilcox (Oracle)" , Jeff Layton , ceph-devel@vger.kernel.org, David Howells , linux-fsdevel@vger.kernel.org
Subject: [PATCH 12/15] ceph: Convert ceph_fill_inode() to take a folio
Date: Fri, 25 Aug 2023 21:12:22 +0100
Message-Id: <20230825201225.348148-13-willy@infradead.org>
In-Reply-To: <20230825201225.348148-1-willy@infradead.org>
References: <20230825201225.348148-1-willy@infradead.org>

Its one caller already has a folio, so pass it through
req->r_locked_folio into ceph_fill_inode().

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/ceph/inode.c      | 10 +++++-----
 fs/ceph/mds_client.h |  2 +-
 fs/ceph/super.h      |  2 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
index ced036d47b3b..d5f0fe39b92f 100644
--- a/fs/ceph/inode.c
+++ b/fs/ceph/inode.c
@@ -913,7 +913,7 @@ static int decode_encrypted_symlink(const char *encsym, int symlen, u8 **decsym)
  * Populate an inode based on info from mds.  May be called on new or
  * existing inodes.
  */
-int ceph_fill_inode(struct inode *inode, struct page *locked_page,
+int ceph_fill_inode(struct inode *inode, struct folio *locked_folio,
 		    struct ceph_mds_reply_info_in *iinfo,
 		    struct ceph_mds_reply_dirfrag *dirinfo,
 		    struct ceph_mds_session *session, int cap_fmode,
@@ -1261,7 +1261,7 @@ int ceph_fill_inode(struct inode *inode, struct page *locked_page,
 		int cache_caps = CEPH_CAP_FILE_CACHE | CEPH_CAP_FILE_LAZYIO;
 		ci->i_inline_version = iinfo->inline_version;
 		if (ceph_has_inline_data(ci) &&
-		    (locked_page || (info_caps & cache_caps)))
+		    (locked_folio || (info_caps & cache_caps)))
 			fill_inline = true;
 	}

@@ -1277,7 +1277,7 @@ int ceph_fill_inode(struct inode *inode, struct page *locked_page,
 		ceph_fscache_register_inode_cookie(inode);

 	if (fill_inline)
-		ceph_fill_inline_data(inode, locked_page,
+		ceph_fill_inline_data(inode, &locked_folio->page,
 				      iinfo->inline_data, iinfo->inline_len);

 	if (wake)
@@ -1596,7 +1596,7 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
 		BUG_ON(!req->r_target_inode);

 		in = req->r_target_inode;
-		err = ceph_fill_inode(in, req->r_locked_page, &rinfo->targeti,
+		err = ceph_fill_inode(in, req->r_locked_folio, &rinfo->targeti,
 				NULL, session,
 				(!test_bit(CEPH_MDS_R_ABORTED, &req->r_req_flags) &&
 				 !test_bit(CEPH_MDS_R_ASYNC, &req->r_req_flags) &&
@@ -2836,7 +2836,7 @@ int __ceph_do_getattr(struct inode *inode, struct folio *locked_folio,
 	ihold(inode);
 	req->r_num_caps = 1;
 	req->r_args.getattr.mask = cpu_to_le32(mask);
-	req->r_locked_page = &locked_folio->page;
+	req->r_locked_folio = locked_folio;
 	err = ceph_mdsc_do_request(mdsc, NULL, req);
 	if (locked_folio && err == 0) {
 		u64 inline_version = req->r_reply_info.targeti.inline_version;
diff --git a/fs/ceph/mds_client.h b/fs/ceph/mds_client.h
index 1fa0f78b7b79..d2cf2ff9fa66 100644
--- a/fs/ceph/mds_client.h
+++ b/fs/ceph/mds_client.h
@@ -320,7 +320,7 @@ struct ceph_mds_request {
 	int r_err;
 	u32 r_readdir_offset;

-	struct page *r_locked_page;
+	struct folio *r_locked_folio;
 	int r_dir_caps;
 	int r_num_caps;
diff --git a/fs/ceph/super.h b/fs/ceph/super.h
index 3649ac41a626..d741a9d15f52 100644
--- a/fs/ceph/super.h
+++ b/fs/ceph/super.h
@@ -1038,7 +1038,7 @@ extern void ceph_fill_file_time(struct inode *inode, int issued,
 				u64 time_warp_seq, struct timespec64 *ctime,
 				struct timespec64 *mtime,
 				struct timespec64 *atime);
-extern int ceph_fill_inode(struct inode *inode, struct page *locked_page,
+int ceph_fill_inode(struct inode *inode, struct folio *locked_folio,
 		    struct ceph_mds_reply_info_in *iinfo,
 		    struct ceph_mds_reply_dirfrag *dirinfo,
 		    struct ceph_mds_session *session, int cap_fmode,
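
Condensing the logic that consumes r_locked_folio: the decision in
ceph_fill_inode() reduces to roughly the predicate below. This is a
paraphrase of the hunk above for illustration, not a function the patch
introduces:

	static bool want_fill_inline(struct ceph_inode_info *ci,
				     struct folio *locked_folio, int info_caps)
	{
		int cache_caps = CEPH_CAP_FILE_CACHE | CEPH_CAP_FILE_LAZYIO;

		/* A caller-supplied locked folio is reason enough to fill
		 * inline data, even when the caps would not allow caching. */
		return ceph_has_inline_data(ci) &&
		       (locked_folio || (info_caps & cache_caps));
	}
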
From patchwork Fri Aug 25 20:12:23 2023
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 13366294
From: "Matthew Wilcox (Oracle)"
To: Xiubo Li , Ilya Dryomov
Cc: "Matthew Wilcox (Oracle)" , Jeff Layton , ceph-devel@vger.kernel.org, David Howells , linux-fsdevel@vger.kernel.org
Subject: [PATCH 13/15] ceph: Convert ceph_fill_inline_data() to take a folio
Date: Fri, 25 Aug 2023 21:12:23 +0100
Message-Id: <20230825201225.348148-14-willy@infradead.org>
In-Reply-To: <20230825201225.348148-1-willy@infradead.org>
References: <20230825201225.348148-1-willy@infradead.org>

Its one caller now has a folio, so use the folio API within this
function.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/ceph/addr.c  | 44 ++++++++++++++++++++++++------------------------
 fs/ceph/inode.c |  2 +-
 fs/ceph/super.h |  2 +-
 3 files changed, 22 insertions(+), 26 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 09178a8ebbde..79d8f2fddd49 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1748,47 +1748,43 @@ static vm_fault_t ceph_page_mkwrite(struct vm_fault *vmf)
 	return ret;
 }

-void ceph_fill_inline_data(struct inode *inode, struct page *locked_page,
+void ceph_fill_inline_data(struct inode *inode, struct folio *locked_folio,
 			   char *data, size_t len)
 {
 	struct address_space *mapping = inode->i_mapping;
-	struct page *page;
+	struct folio *folio;

-	if (locked_page) {
-		page = locked_page;
+	if (locked_folio) {
+		folio = locked_folio;
 	} else {
 		if (i_size_read(inode) == 0)
 			return;
-		page = find_or_create_page(mapping, 0,
-					   mapping_gfp_constraint(mapping,
-					   ~__GFP_FS));
-		if (!page)
+		folio = __filemap_get_folio(mapping, 0,
+				FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
+				mapping_gfp_constraint(mapping, ~__GFP_FS));
+		if (IS_ERR(folio))
 			return;
-		if (PageUptodate(page)) {
-			unlock_page(page);
-			put_page(page);
+		if (folio_test_uptodate(folio)) {
+			folio_unlock(folio);
+			folio_put(folio);
 			return;
 		}
 	}

-	dout("fill_inline_data %p %llx.%llx len %zu locked_page %p\n",
-	     inode, ceph_vinop(inode), len, locked_page);
+	dout("fill_inline_data %p %llx.%llx len %zu locked_folio %lu\n",
+	     inode, ceph_vinop(inode), len, locked_folio->index);

-	if (len > 0) {
-		void *kaddr = kmap_atomic(page);
-		memcpy(kaddr, data, len);
-		kunmap_atomic(kaddr);
-	}
+	memcpy_to_folio(folio, 0, data, len);

-	if (page != locked_page) {
+	if (folio != locked_folio) {
 		if (len < PAGE_SIZE)
-			zero_user_segment(page, len, PAGE_SIZE);
+			folio_zero_segment(folio, len, PAGE_SIZE);
 		else
-			flush_dcache_page(page);
+			flush_dcache_folio(folio);

-		SetPageUptodate(page);
-		unlock_page(page);
-		put_page(page);
+		folio_mark_uptodate(folio);
+		folio_unlock(folio);
+		folio_put(folio);
 	}
 }
diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
index d5f0fe39b92f..70f7f68ba078 100644
--- a/fs/ceph/inode.c
+++ b/fs/ceph/inode.c
@@ -1277,7 +1277,7 @@ int ceph_fill_inode(struct inode *inode, struct folio *locked_folio,
 		ceph_fscache_register_inode_cookie(inode);

 	if (fill_inline)
-		ceph_fill_inline_data(inode, &locked_folio->page,
+		ceph_fill_inline_data(inode, locked_folio,
 				      iinfo->inline_data, iinfo->inline_len);

 	if (wake)
diff --git a/fs/ceph/super.h b/fs/ceph/super.h
index d741a9d15f52..a986928c3000 100644
--- a/fs/ceph/super.h
+++ b/fs/ceph/super.h
@@ -1311,7 +1311,7 @@ extern ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
 				struct iov_iter *to, int *retry_op,
 				u64 *last_objver);
 extern int ceph_release(struct inode *inode, struct file *filp);
-extern void ceph_fill_inline_data(struct inode *inode, struct page *locked_page,
+void ceph_fill_inline_data(struct inode *inode, struct folio *locked_folio,
 			   char *data, size_t len);

 /* dir.c */
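
The lookup-or-create conversion is the subtle part of this patch:
find_or_create_page() returned NULL on failure, while __filemap_get_folio()
returns an ERR_PTR(), so the error check changes shape. A standalone
sketch of the new pattern, with a hypothetical function name:

	static struct folio *grab_first_folio(struct address_space *mapping)
	{
		struct folio *folio;

		folio = __filemap_get_folio(mapping, 0,
				FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
				mapping_gfp_constraint(mapping, ~__GFP_FS));
		if (IS_ERR(folio))
			return NULL;	/* note: IS_ERR(), not a NULL check */

		/* On success the folio comes back locked and referenced;
		 * pair with folio_unlock()/folio_put(). */
		return folio;
	}

memcpy_to_folio() likewise folds the kmap_atomic()/memcpy()/kunmap_atomic()
triplet, including its len == 0 case, into a single highmem-safe call.
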
From patchwork Fri Aug 25 20:12:24 2023
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 13366281
From: "Matthew Wilcox (Oracle)"
To: Xiubo Li , Ilya Dryomov
Cc: "Matthew Wilcox (Oracle)" , Jeff Layton , ceph-devel@vger.kernel.org, David Howells , linux-fsdevel@vger.kernel.org
Subject: [PATCH 14/15] ceph: Convert ceph_set_page_fscache() to ceph_folio_start_fscache()
Date: Fri, 25 Aug 2023 21:12:24 +0100
Message-Id: <20230825201225.348148-15-willy@infradead.org>
In-Reply-To: <20230825201225.348148-1-willy@infradead.org>
References: <20230825201225.348148-1-willy@infradead.org>

Both callers have the folio, so turn this wrapper into one for
folio_start_fscache().

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/ceph/addr.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 79d8f2fddd49..c2a81b67fc58 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -514,9 +514,9 @@ const struct netfs_request_ops ceph_netfs_ops = {
 };

 #ifdef CONFIG_CEPH_FSCACHE
-static void ceph_set_page_fscache(struct page *page)
+static void ceph_folio_start_fscache(struct folio *folio)
 {
-	set_page_fscache(page);
+	folio_start_fscache(folio);
 }

 static void ceph_fscache_write_terminated(void *priv, ssize_t error, bool was_async)
@@ -536,7 +536,7 @@ static void ceph_fscache_write_to_cache(struct inode *inode, u64 off, u64 len, b
 		ceph_fscache_write_terminated, inode, caching);
 }
 #else
-static inline void ceph_set_page_fscache(struct page *page)
+static inline void ceph_folio_start_fscache(struct folio *folio)
 {
 }

@@ -727,7 +727,7 @@ static int writepage_nounlock(struct folio *folio, struct writeback_control *wbc
 	folio_start_writeback(folio);
 	if (caching)
-		ceph_set_page_fscache(&folio->page);
+		ceph_folio_start_fscache(folio);
 	ceph_fscache_write_to_cache(inode, page_off, len, caching);

 	if (IS_ENCRYPTED(inode)) {
@@ -1242,7 +1242,7 @@ static int ceph_writepages_start(struct address_space *mapping,
 			folio_start_writeback(folio);
 			if (caching)
-				ceph_set_page_fscache(pages[i]);
+				ceph_folio_start_fscache(folio);
 			len += folio_size(folio);
 		}
 		ceph_fscache_write_to_cache(inode, offset, len, caching);
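
Both call sites follow the same claim-then-mark order. A condensed sketch
of the per-folio step in ceph_writepages_start(), compressed into a helper
purely for illustration (the patch keeps it open-coded):

	/* 'caching' mirrors the fscache_cookie_enabled() test made once
	 * per writeback pass. */
	static u64 start_one_folio(struct folio *folio, bool caching, u64 len)
	{
		folio_start_writeback(folio);		/* claim writeback first */
		if (caching)
			ceph_folio_start_fscache(folio);	/* then mark for fscache */
		return len + folio_size(folio);
	}
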
From patchwork Fri Aug 25 20:12:25 2023
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 13366285
From: "Matthew Wilcox (Oracle)"
To: Xiubo Li , Ilya Dryomov
Cc: "Matthew Wilcox (Oracle)" , Jeff Layton , ceph-devel@vger.kernel.org, David Howells , linux-fsdevel@vger.kernel.org
Subject: [PATCH 15/15] netfs: Remove unused functions
Date: Fri, 25 Aug 2023 21:12:25 +0100
Message-Id: <20230825201225.348148-16-willy@infradead.org>
In-Reply-To: <20230825201225.348148-1-willy@infradead.org>
References: <20230825201225.348148-1-willy@infradead.org>

set_page_fscache(), wait_on_page_fscache() and
wait_on_page_fscache_killable() have no more users.  Remove them and
update the documentation to describe their folio equivalents.

Signed-off-by: Matthew Wilcox (Oracle)
---
 .../filesystems/caching/netfs-api.rst | 30 +++++++++----------
 include/linux/netfs.h                 | 15 ----------
 2 files changed, 15 insertions(+), 30 deletions(-)

diff --git a/Documentation/filesystems/caching/netfs-api.rst b/Documentation/filesystems/caching/netfs-api.rst
index 665b27f1556e..6285c1433ac5 100644
--- a/Documentation/filesystems/caching/netfs-api.rst
+++ b/Documentation/filesystems/caching/netfs-api.rst
@@ -374,7 +374,7 @@ Caching of Local Modifications
 ==============================

 If a network filesystem has locally modified data that it wants to write to the
-cache, it needs to mark the pages to indicate that a write is in progress, and
+cache, it needs to mark the folios to indicate that a write is in progress, and
 if the mark is already present, it needs to wait for it to be removed first
 (presumably due to an already in-progress operation).  This prevents multiple
 competing DIO writes to the same storage in the cache.
@@ -384,14 +384,14 @@ like::

 	bool caching = fscache_cookie_enabled(cookie);

-If caching is to be attempted, pages should be waited for and then marked using
+If caching is to be attempted, folios should be waited for and then marked using
 the following functions provided by the netfs helper library::

-	void set_page_fscache(struct page *page);
-	void wait_on_page_fscache(struct page *page);
-	int wait_on_page_fscache_killable(struct page *page);
+	void folio_start_fscache(struct folio *folio);
+	void folio_wait_fscache(struct folio *folio);
+	int folio_wait_fscache_killable(struct folio *folio);

-Once all the pages in the span are marked, the netfs can ask fscache to
+Once all the folios in the span are marked, the netfs can ask fscache to
 schedule a write of that region::

 	void fscache_write_to_cache(struct fscache_cookie *cookie,
@@ -408,7 +408,7 @@ by calling::
 			loff_t start, size_t len, bool caching)

-In these functions, a pointer to the mapping to which the source pages are
+In these functions, a pointer to the mapping to which the source folios are
 attached is passed in and start and len indicate the size of the region that's
 going to be written (it doesn't have to align to page boundaries necessarily,
 but it does have to align to DIO boundaries on the backing filesystem).  The
@@ -421,29 +421,29 @@ and term_func indicates an optional completion function, to which
 term_func_priv will be passed, along with the error or amount written.

 Note that the write function will always run asynchronously and will unmark all
-the pages upon completion before calling term_func.
+the folios upon completion before calling term_func.

-Page Release and Invalidation
-=============================
+Folio Release and Invalidation
+==============================

 Fscache keeps track of whether we have any data in the cache yet for a cache
 object we've just created.  It knows it doesn't have to do any reading until it
-has done a write and then the page it wrote from has been released by the VM,
+has done a write and then the folio it wrote from has been released by the VM,
 after which it *has* to look in the cache.

-To inform fscache that a page might now be in the cache, the following function
+To inform fscache that a folio might now be in the cache, the following function
 should be called from the ``release_folio`` address space op::

 	void fscache_note_page_release(struct fscache_cookie *cookie);

 if the page has been released (ie. release_folio returned true).

-Page release and page invalidation should also wait for any mark left on the
+Folio release and folio invalidation should also wait for any mark left on the
 page to say that a DIO write is underway from that page::

-	void wait_on_page_fscache(struct page *page);
-	int wait_on_page_fscache_killable(struct page *page);
+	void folio_wait_fscache(struct folio *folio);
+	int folio_wait_fscache_killable(struct folio *folio);


 API Function Reference
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index b11a84f6c32b..5e43e7010ff5 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -89,26 +89,11 @@ static inline int folio_wait_fscache_killable(struct folio *folio)
 	return folio_wait_private_2_killable(folio);
 }

-static inline void set_page_fscache(struct page *page)
-{
-	folio_start_fscache(page_folio(page));
-}
-
 static inline void end_page_fscache(struct page *page)
 {
 	folio_end_private_2(page_folio(page));
 }

-static inline void wait_on_page_fscache(struct page *page)
-{
-	folio_wait_private_2(page_folio(page));
-}
-
-static inline int wait_on_page_fscache_killable(struct page *page)
-{
-	return folio_wait_private_2_killable(page_folio(page));
-}
-
 enum netfs_io_source {
 	NETFS_FILL_WITH_ZEROES,
 	NETFS_DOWNLOAD_FROM_SERVER,
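
With the page wrappers gone, a filesystem's release path waits on the
folio directly. A minimal release_folio sketch using the surviving
helpers; lookup_cookie() is a hypothetical placeholder for however the
filesystem reaches its fscache cookie:

	static bool sketch_release_folio(struct folio *folio, gfp_t gfp)
	{
		struct fscache_cookie *cookie =
			lookup_cookie(folio->mapping->host);	/* hypothetical */

		if (folio_test_fscache(folio)) {
			if (!gfpflags_allow_blocking(gfp))
				return false;	/* cannot sleep here */
			folio_wait_fscache(folio);	/* DIO write to cache done */
		}
		fscache_note_page_release(cookie);
		return true;
	}
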