| Field | Value |
|---|---|
| Message ID | 20221212191317.9730-1-vishal.moola@gmail.com |
| State | New |
| Series | [RFC] f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag() |
On 2022/12/13 3:13, Vishal Moola (Oracle) wrote:
> Converted the function to use a folio_batch instead of pagevec. This is in
> preparation for the removal of find_get_pages_range_tag().
>
> Also modified f2fs_all_cluster_page_ready to take in a folio_batch instead
> of pagevec. This does NOT support large folios. The function currently
> only utilizes folios of size 1 so this shouldn't cause any issues right
> now.
>
> This version of the patch limits the number of pages fetched to
> F2FS_ONSTACK_PAGES. If that ever happens, update the start index here
> since filemap_get_folios_tag() updates the index to be after the last
> found folio, not necessarily the last used page.
>
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
>
> Let me know if you prefer this version and I'll include it in v5
> of the patch series when I rebase it after the merge window.
>
> ---
> fs/f2fs/data.c | 86 ++++++++++++++++++++++++++++++++++----------------
> 1 file changed, 59 insertions(+), 27 deletions(-)
>
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index a71e818cd67b..1703e353f0e0 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -2939,6 +2939,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> 	int ret = 0;
> 	int done = 0, retry = 0;
> 	struct page *pages[F2FS_ONSTACK_PAGES];
> +	struct folio_batch fbatch;
> 	struct f2fs_sb_info *sbi = F2FS_M_SB(mapping);
> 	struct bio *bio = NULL;
> 	sector_t last_block;
> @@ -2959,6 +2960,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> 		.private = NULL,
> 	};
> #endif
> +	int nr_folios, p, idx;
> 	int nr_pages;
> 	pgoff_t index;
> 	pgoff_t end;		/* Inclusive */
> @@ -2969,6 +2971,8 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> 	int submitted = 0;
> 	int i;
>
> +	folio_batch_init(&fbatch);
> +
> 	if (get_dirty_pages(mapping->host) <=
> 			SM_I(F2FS_M_SB(mapping))->min_hot_blocks)
> 		set_inode_flag(mapping->host, FI_HOT_DATA);
> @@ -2994,13 +2998,38 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> 		tag_pages_for_writeback(mapping, index, end);
> 	done_index = index;
> 	while (!done && !retry && (index <= end)) {
> -		nr_pages = find_get_pages_range_tag(mapping, &index, end,
> -				tag, F2FS_ONSTACK_PAGES, pages);
> -		if (nr_pages == 0)
> +		nr_pages = 0;
> +again:
> +		nr_folios = filemap_get_folios_tag(mapping, &index, end,
> +				tag, &fbatch);
> +		if (nr_folios == 0) {
> +			if (nr_pages)
> +				goto write;
> 			break;
> +		}
>
> +		for (i = 0; i < nr_folios; i++) {
> +			struct folio* folio = fbatch.folios[i];
> +
> +			idx = 0;
> +			p = folio_nr_pages(folio);
> +add_more:
> +			pages[nr_pages] = folio_page(folio,idx);
> +			folio_ref_inc(folio);

It looks if CONFIG_LRU_GEN is not set, folio_ref_inc() does nothing. For those
folios recorded in pages array, we need to call folio_get() here to add one more
reference on each of them?

> +			if (++nr_pages == F2FS_ONSTACK_PAGES) {
> +				index = folio->index + idx + 1;
> +				folio_batch_release(&fbatch);

Otherwise after folio_batch_release(), it may cause use-after-free issue when
accessing pages array? Or am I missing something?

> +				goto write;
> +			}
> +			if (++idx < p)
> +				goto add_more;
> +		}
> +		folio_batch_release(&fbatch);
> +		goto again;
> +write:
> 		for (i = 0; i < nr_pages; i++) {
> 			struct page *page = pages[i];
> +			struct folio *folio = page_folio(page);
> 			bool need_readd;
> readd:
> 			need_readd = false;
> @@ -3017,7 +3046,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> 			}
>
> 			if (!f2fs_cluster_can_merge_page(&cc,
> -					page->index)) {
> +					folio->index)) {
> 				ret = f2fs_write_multi_pages(&cc,
> 					&submitted, wbc, io_type);
> 				if (!ret)
> @@ -3026,27 +3055,28 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> 			}
>
> 			if (unlikely(f2fs_cp_error(sbi)))
> -				goto lock_page;
> +				goto lock_folio;
>
> 			if (!f2fs_cluster_is_empty(&cc))
> -				goto lock_page;
> +				goto lock_folio;
>
> 			if (f2fs_all_cluster_page_ready(&cc,
> 				pages, i, nr_pages, true))
> -				goto lock_page;
> +				goto lock_folio;
>
> 			ret2 = f2fs_prepare_compress_overwrite(
> 					inode, &pagep,
> -					page->index, &fsdata);
> +					folio->index, &fsdata);
> 			if (ret2 < 0) {
> 				ret = ret2;
> 				done = 1;
> 				break;
> 			} else if (ret2 &&
> 				(!f2fs_compress_write_end(inode,
> -					fsdata, page->index, 1) ||
> +					fsdata, folio->index, 1) ||
> 				!f2fs_all_cluster_page_ready(&cc,
> -					pages, i, nr_pages, false))) {
> +					pages, i, nr_pages,
> +					false))) {
> 				retry = 1;
> 				break;
> 			}
> @@ -3059,46 +3089,47 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> 				break;
> 			}
> #ifdef CONFIG_F2FS_FS_COMPRESSION
> -lock_page:
> +lock_folio:
> #endif
> -			done_index = page->index;
> +			done_index = folio->index;
> retry_write:
> -			lock_page(page);
> +			folio_lock(folio);
>
> -			if (unlikely(page->mapping != mapping)) {
> +			if (unlikely(folio->mapping != mapping)) {
> continue_unlock:
> -				unlock_page(page);
> +				folio_unlock(folio);
> 				continue;
> 			}
>
> -			if (!PageDirty(page)) {
> +			if (!folio_test_dirty(folio)) {
> 				/* someone wrote it for us */
> 				goto continue_unlock;
> 			}
>
> -			if (PageWriteback(page)) {
> +			if (folio_test_writeback(folio)) {
> 				if (wbc->sync_mode != WB_SYNC_NONE)
> -					f2fs_wait_on_page_writeback(page,
> +					f2fs_wait_on_page_writeback(
> +							&folio->page,
> 							DATA, true, true);
> 				else
> 					goto continue_unlock;
> 			}
>
> -			if (!clear_page_dirty_for_io(page))
> +			if (!folio_clear_dirty_for_io(folio))
> 				goto continue_unlock;
>
> #ifdef CONFIG_F2FS_FS_COMPRESSION
> 			if (f2fs_compressed_file(inode)) {
> -				get_page(page);
> -				f2fs_compress_ctx_add_page(&cc, page);
> +				folio_get(folio);
> +				f2fs_compress_ctx_add_page(&cc, &folio->page);
> 				continue;
> 			}
> #endif
> -			ret = f2fs_write_single_data_page(page, &submitted,
> -					&bio, &last_block, wbc, io_type,
> -					0, true);
> +			ret = f2fs_write_single_data_page(&folio->page,
> +					&submitted, &bio, &last_block,
> +					wbc, io_type, 0, true);
> 			if (ret == AOP_WRITEPAGE_ACTIVATE)
> -				unlock_page(page);
> +				folio_unlock(folio);
> #ifdef CONFIG_F2FS_FS_COMPRESSION
> result:
> #endif
> @@ -3122,7 +3153,8 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> 				}
> 				goto next;
> 			}
> -			done_index = page->index + 1;
> +			done_index = folio->index +
> +					folio_nr_pages(folio);
> 			done = 1;
> 			break;
> 		}
> @@ -3136,7 +3168,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> 			if (need_readd)
> 				goto readd;
> 		}
> -		release_pages(pages, nr_pages);
> +		release_pages(pages,nr_pages);

No need to change?

Thanks,

> 		cond_resched();
> 	}
> #ifdef CONFIG_F2FS_FS_COMPRESSION
On Thu, Dec 15, 2022 at 09:48:41AM +0800, Chao Yu wrote:
> On 2022/12/13 3:13, Vishal Moola (Oracle) wrote:
> > +add_more:
> > +			pages[nr_pages] = folio_page(folio,idx);
> > +			folio_ref_inc(folio);
>
> It looks if CONFIG_LRU_GEN is not set, folio_ref_inc() does nothing. For those
> folios recorded in pages array, we need to call folio_get() here to add one more
> reference on each of them?

static inline void folio_get(struct folio *folio)
{
	VM_BUG_ON_FOLIO(folio_ref_zero_or_close_to_overflow(folio), folio);
	folio_ref_inc(folio);
}

That said, folio_ref_inc() is very much MM-internal and filesystems
should be using folio_get(), so please make that modification in the
next revision, Vishal.
On 12/12, Vishal Moola (Oracle) wrote:
> Converted the function to use a folio_batch instead of pagevec. This is in
> preparation for the removal of find_get_pages_range_tag().
>
> Also modified f2fs_all_cluster_page_ready to take in a folio_batch instead
> of pagevec. This does NOT support large folios. The function currently
> only utilizes folios of size 1 so this shouldn't cause any issues right
> now.
>
> This version of the patch limits the number of pages fetched to
> F2FS_ONSTACK_PAGES. If that ever happens, update the start index here
> since filemap_get_folios_tag() updates the index to be after the last
> found folio, not necessarily the last used page.
>
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
>
> Let me know if you prefer this version and I'll include it in v5
> of the patch series when I rebase it after the merge window.
>
> ---
> fs/f2fs/data.c | 86 ++++++++++++++++++++++++++++++++++----------------
> 1 file changed, 59 insertions(+), 27 deletions(-)
>
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index a71e818cd67b..1703e353f0e0 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -2939,6 +2939,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> 	int ret = 0;
> 	int done = 0, retry = 0;
> 	struct page *pages[F2FS_ONSTACK_PAGES];
> +	struct folio_batch fbatch;
> 	struct f2fs_sb_info *sbi = F2FS_M_SB(mapping);
> 	struct bio *bio = NULL;
> 	sector_t last_block;
> @@ -2959,6 +2960,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> 		.private = NULL,
> 	};
> #endif
> +	int nr_folios, p, idx;
> 	int nr_pages;
> 	pgoff_t index;
> 	pgoff_t end;		/* Inclusive */
> @@ -2969,6 +2971,8 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> 	int submitted = 0;
> 	int i;
>
> +	folio_batch_init(&fbatch);
> +
> 	if (get_dirty_pages(mapping->host) <=
> 			SM_I(F2FS_M_SB(mapping))->min_hot_blocks)
> 		set_inode_flag(mapping->host, FI_HOT_DATA);
> @@ -2994,13 +2998,38 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> 		tag_pages_for_writeback(mapping, index, end);
> 	done_index = index;
> 	while (!done && !retry && (index <= end)) {
> -		nr_pages = find_get_pages_range_tag(mapping, &index, end,
> -				tag, F2FS_ONSTACK_PAGES, pages);
> -		if (nr_pages == 0)
> +		nr_pages = 0;
> +again:
> +		nr_folios = filemap_get_folios_tag(mapping, &index, end,
> +				tag, &fbatch);

Can't folio handle this internally with F2FS_ONSTACK_PAGES and pages?

> +		if (nr_folios == 0) {
> +			if (nr_pages)
> +				goto write;
> 			break;
> +		}
>
> +		for (i = 0; i < nr_folios; i++) {
> +			struct folio* folio = fbatch.folios[i];
> +
> +			idx = 0;
> +			p = folio_nr_pages(folio);
> +add_more:
> +			pages[nr_pages] = folio_page(folio,idx);
> +			folio_ref_inc(folio);
> +			if (++nr_pages == F2FS_ONSTACK_PAGES) {
> +				index = folio->index + idx + 1;
> +				folio_batch_release(&fbatch);
> +				goto write;
> +			}
> +			if (++idx < p)
> +				goto add_more;
> +		}
> +		folio_batch_release(&fbatch);
> +		goto again;
> +write:
> 		for (i = 0; i < nr_pages; i++) {
> 			struct page *page = pages[i];
> +			struct folio *folio = page_folio(page);
> 			bool need_readd;
> readd:
> 			need_readd = false;
> @@ -3017,7 +3046,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> 			}
>
> 			if (!f2fs_cluster_can_merge_page(&cc,
> -					page->index)) {
> +					folio->index)) {
> 				ret = f2fs_write_multi_pages(&cc,
> 					&submitted, wbc, io_type);
> 				if (!ret)
> @@ -3026,27 +3055,28 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> 			}
>
> 			if (unlikely(f2fs_cp_error(sbi)))
> -				goto lock_page;
> +				goto lock_folio;
>
> 			if (!f2fs_cluster_is_empty(&cc))
> -				goto lock_page;
> +				goto lock_folio;
>
> 			if (f2fs_all_cluster_page_ready(&cc,
> 				pages, i, nr_pages, true))
> -				goto lock_page;
> +				goto lock_folio;
>
> 			ret2 = f2fs_prepare_compress_overwrite(
> 					inode, &pagep,
> -					page->index, &fsdata);
> +					folio->index, &fsdata);
> 			if (ret2 < 0) {
> 				ret = ret2;
> 				done = 1;
> 				break;
> 			} else if (ret2 &&
> 				(!f2fs_compress_write_end(inode,
> -					fsdata, page->index, 1) ||
> +					fsdata, folio->index, 1) ||
> 				!f2fs_all_cluster_page_ready(&cc,
> -					pages, i, nr_pages, false))) {
> +					pages, i, nr_pages,
> +					false))) {
> 				retry = 1;
> 				break;
> 			}
> @@ -3059,46 +3089,47 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> 				break;
> 			}
> #ifdef CONFIG_F2FS_FS_COMPRESSION
> -lock_page:
> +lock_folio:
> #endif
> -			done_index = page->index;
> +			done_index = folio->index;
> retry_write:
> -			lock_page(page);
> +			folio_lock(folio);
>
> -			if (unlikely(page->mapping != mapping)) {
> +			if (unlikely(folio->mapping != mapping)) {
> continue_unlock:
> -				unlock_page(page);
> +				folio_unlock(folio);
> 				continue;
> 			}
>
> -			if (!PageDirty(page)) {
> +			if (!folio_test_dirty(folio)) {
> 				/* someone wrote it for us */
> 				goto continue_unlock;
> 			}
>
> -			if (PageWriteback(page)) {
> +			if (folio_test_writeback(folio)) {
> 				if (wbc->sync_mode != WB_SYNC_NONE)
> -					f2fs_wait_on_page_writeback(page,
> +					f2fs_wait_on_page_writeback(
> +							&folio->page,
> 							DATA, true, true);
> 				else
> 					goto continue_unlock;
> 			}
>
> -			if (!clear_page_dirty_for_io(page))
> +			if (!folio_clear_dirty_for_io(folio))
> 				goto continue_unlock;
>
> #ifdef CONFIG_F2FS_FS_COMPRESSION
> 			if (f2fs_compressed_file(inode)) {
> -				get_page(page);
> -				f2fs_compress_ctx_add_page(&cc, page);
> +				folio_get(folio);
> +				f2fs_compress_ctx_add_page(&cc, &folio->page);
> 				continue;
> 			}
> #endif
> -			ret = f2fs_write_single_data_page(page, &submitted,
> -					&bio, &last_block, wbc, io_type,
> -					0, true);
> +			ret = f2fs_write_single_data_page(&folio->page,
> +					&submitted, &bio, &last_block,
> +					wbc, io_type, 0, true);
> 			if (ret == AOP_WRITEPAGE_ACTIVATE)
> -				unlock_page(page);
> +				folio_unlock(folio);
> #ifdef CONFIG_F2FS_FS_COMPRESSION
> result:
> #endif
> @@ -3122,7 +3153,8 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> 				}
> 				goto next;
> 			}
> -			done_index = page->index + 1;
> +			done_index = folio->index +
> +					folio_nr_pages(folio);
> 			done = 1;
> 			break;
> 		}
> @@ -3136,7 +3168,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> 			if (need_readd)
> 				goto readd;
> 		}
> -		release_pages(pages, nr_pages);
> +		release_pages(pages,nr_pages);
> 		cond_resched();
> 	}
> #ifdef CONFIG_F2FS_FS_COMPRESSION
> --
> 2.38.1
On Thu, Dec 15, 2022 at 10:45 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Thu, Dec 15, 2022 at 09:48:41AM +0800, Chao Yu wrote:
> > On 2022/12/13 3:13, Vishal Moola (Oracle) wrote:
> > > +add_more:
> > > +			pages[nr_pages] = folio_page(folio,idx);
> > > +			folio_ref_inc(folio);
> >
> > It looks if CONFIG_LRU_GEN is not set, folio_ref_inc() does nothing. For those
> > folios recorded in pages array, we need to call folio_get() here to add one more
> > reference on each of them?
>
> static inline void folio_get(struct folio *folio)
> {
> 	VM_BUG_ON_FOLIO(folio_ref_zero_or_close_to_overflow(folio), folio);
> 	folio_ref_inc(folio);
> }
>
> That said, folio_ref_inc() is very much MM-internal and filesystems
> should be using folio_get(), so please make that modification in the
> next revision, Vishal.

Ok, I'll go through and fix all of those in the next version.
On Wed, Dec 21, 2022 at 09:17:30AM -0800, Vishal Moola wrote:
> > That said, folio_ref_inc() is very much MM-internal and filesystems
> > should be using folio_get(), so please make that modification in the
> > next revision, Vishal.
>
> Ok, I'll go through and fix all of those in the next version.

Btw, something a lot more productive in this area would be to figure
out how we could convert all these copy and paste versions of
write_cache_pages to use common code. This might need changes to the
common code, but the amount of duplicate and poorly maintained versions
of this loop is a bit alarming:

 - btree_write_cache_pages
 - extent_write_cache_pages
 - f2fs_write_cache_pages
 - gfs2_write_cache_jdata
On Thu, Dec 15, 2022 at 11:02:24AM -0800, Jaegeuk Kim wrote:
> On 12/12, Vishal Moola (Oracle) wrote:
> > @@ -2994,13 +2998,38 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> > 		tag_pages_for_writeback(mapping, index, end);
> > 	done_index = index;
> > 	while (!done && !retry && (index <= end)) {
> > -		nr_pages = find_get_pages_range_tag(mapping, &index, end,
> > -				tag, F2FS_ONSTACK_PAGES, pages);
> > -		if (nr_pages == 0)
> > +		nr_pages = 0;
> > +again:
> > +		nr_folios = filemap_get_folios_tag(mapping, &index, end,
> > +				tag, &fbatch);
>
> Can't folio handle this internally with F2FS_ONSTACK_PAGES and pages?

I really want to discourage filesystems from doing this kind of thing.
The folio_batch is the natural size for doing batches of work, and
having the consistency across all these APIs of passing in a folio_batch
is quite valuable. I understand f2fs wants to get more memory in a
single batch, but the right way to do that is to use larger folios.