Message ID | 20230915183848.1018717-9-kernel@pankajraghav.com (mailing list archive)
---|---
State | New
Series | Enable block size > page size in XFS
On Fri, Sep 15, 2023 at 08:38:33PM +0200, Pankaj Raghav wrote:
> From: Luis Chamberlain <mcgrof@kernel.org>
>
> Align the index to the mapping_min_order number of pages while setting
> the XA_STATE in filemap_get_folios_tag().

... because?  It should already search backwards in the page cache,
otherwise calling sync_file_range() would skip the start if it landed
in a tail page of a folio.
On Fri, Sep 15, 2023 at 08:50:59PM +0100, Matthew Wilcox wrote:
> On Fri, Sep 15, 2023 at 08:38:33PM +0200, Pankaj Raghav wrote:
> > From: Luis Chamberlain <mcgrof@kernel.org>
> >
> > Align the index to the mapping_min_order number of pages while setting
> > the XA_STATE in filemap_get_folios_tag().
>
> ... because?  It should already search backwards in the page cache,
> otherwise calling sync_file_range() would skip the start if it landed
> in a tail page of a folio.

Thanks! Will drop and verify!

  Luis
diff --git a/mm/filemap.c b/mm/filemap.c
index 15bc810bfc89..21e1341526ab 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2280,7 +2280,9 @@ EXPORT_SYMBOL(filemap_get_folios_contig);
 unsigned filemap_get_folios_tag(struct address_space *mapping, pgoff_t *start,
 		pgoff_t end, xa_mark_t tag, struct folio_batch *fbatch)
 {
-	XA_STATE(xas, &mapping->i_pages, *start);
+	unsigned int min_order = mapping_min_folio_order(mapping);
+	unsigned int nrpages = 1UL << min_order;
+	XA_STATE(xas, &mapping->i_pages, round_down(*start, nrpages));
 	struct folio *folio;
 
 	rcu_read_lock();