Message ID | 20230915183848.1018717-5-kernel@pankajraghav.com |
---|---|
State | New |
Series | Enable block size > page size in XFS |
On Fri, Sep 15, 2023 at 08:38:29PM +0200, Pankaj Raghav wrote:
> From: Luis Chamberlain <mcgrof@kernel.org>
>
> Similar to page_cache_delete(), call xas_set_order for non-hugetlb pages
> while deleting an entry from the page cache.

Is this necessary? As I read xas_store(), if you're storing NULL, it
will wipe out all sibling entries. Was this based on "oops, no, it
doesn't" or "here is a gratuitous difference, change it"?
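Matthew's point about sibling entries can be illustrated with the xarray advanced API. The following is a minimal sketch, not part of the thread: it stores one order-2 entry, then stores NULL through the base index alone, without calling xas_set_order(); if xas_store() wipes siblings as he describes, the whole range reads back empty.

```c
#include <linux/xarray.h>

/* Store one order-2 entry covering indices 8..11. */
static void store_order2_entry(struct xarray *xa)
{
	XA_STATE_ORDER(xas, xa, 8, 2);

	do {
		xas_lock(&xas);
		xas_store(&xas, xa_mk_value(0xdead));
		xas_unlock(&xas);
	} while (xas_nomem(&xas, GFP_KERNEL));
}

/* Delete through the base index only, without calling xas_set_order(). */
static void delete_without_set_order(struct xarray *xa)
{
	XA_STATE(xas, xa, 8);

	xas_lock(&xas);
	xas_store(&xas, NULL);
	xas_unlock(&xas);

	/* If xas_store(NULL) wipes the siblings, indices 9..11 are empty. */
	WARN_ON(xa_load(xa, 9) || xa_load(xa, 11));
}
```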
On Fri, Sep 15, 2023 at 08:43:28PM +0100, Matthew Wilcox wrote:
> On Fri, Sep 15, 2023 at 08:38:29PM +0200, Pankaj Raghav wrote:
> > From: Luis Chamberlain <mcgrof@kernel.org>
> >
> > Similar to page_cache_delete(), call xas_set_order for non-hugetlb pages
> > while deleting an entry from the page cache.
>
> Is this necessary? As I read xas_store(), if you're storing NULL, it
> will wipe out all sibling entries. Was this based on "oops, no, it
> doesn't" or "here is a gratuitous difference, change it"?

Based on code inspection: I saw that page_cache_delete() did it. The
xarray docs and selftests were not clear about how the advanced API
handles this case, and the use of xas_set_order() in
page_cache_delete() gave me concerns that we needed it here too. We do
have some enhancements to the xarray selftests that use the advanced
API, which we could extend with this particular case before posting,
so as to prove or disprove whether this is really needed. Why would it
be needed on page_cache_delete() but not here?

  Luis
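The selftest extension Luis describes might look something like the following, in the style of lib/test_xarray.c. This is a hypothetical sketch, not code from the series; xa_store_order(), xa_mk_index() and XA_BUG_ON() are existing helpers in that file.

```c
/*
 * Hypothetical check for lib/test_xarray.c: erase a multi-order entry
 * through its first index without setting the order, then verify the
 * sibling slots were cleared along with it.
 */
static noinline void check_multi_order_erase(struct xarray *xa)
{
	unsigned int order = 2;
	unsigned long index = 1UL << order;

	xa_store_order(xa, index, order, xa_mk_index(index), GFP_KERNEL);

	/* Erase via the base index alone; no xas_set_order() involved. */
	xa_erase(xa, index);

	/* xas_store(NULL) should have wiped the sibling entries too. */
	XA_BUG_ON(xa, xa_load(xa, index + 1) != NULL);
	XA_BUG_ON(xa, !xa_empty(xa));
}
```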
```diff
diff --git a/mm/filemap.c b/mm/filemap.c
index b1ce63143df5..2c47729dc8b0 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -126,6 +126,7 @@
 static void page_cache_delete(struct address_space *mapping,
 				   struct folio *folio, void *shadow)
 {
+	unsigned int min_order = mapping_min_folio_order(mapping);
 	XA_STATE(xas, &mapping->i_pages, folio->index);
 	long nr = 1;
 
@@ -134,6 +135,7 @@ static void page_cache_delete(struct address_space *mapping,
 		xas_set_order(&xas, folio->index, folio_order(folio));
 		nr = folio_nr_pages(folio);
 	}
+	VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 
 	xas_store(&xas, shadow);
@@ -276,6 +278,7 @@ void filemap_remove_folio(struct folio *folio)
 static void page_cache_delete_batch(struct address_space *mapping,
 			     struct folio_batch *fbatch)
 {
+	unsigned int min_order = mapping_min_folio_order(mapping);
 	XA_STATE(xas, &mapping->i_pages, fbatch->folios[0]->index);
 	long total_pages = 0;
 	int i = 0;
@@ -304,6 +307,11 @@ static void page_cache_delete_batch(struct address_space *mapping,
 
 		WARN_ON_ONCE(!folio_test_locked(folio));
 
+		/* hugetlb pages are represented by a single entry in the xarray */
+		if (!folio_test_hugetlb(folio)) {
+			VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
+			xas_set_order(&xas, folio->index, folio_order(folio));
+		}
 		folio->mapping = NULL;
 		/* Leave folio->index set: truncation lookup relies on it */
 
```
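For context, mapping_min_folio_order() is introduced earlier in this series; the assertions added above encode the invariant it implies. A minimal sketch of that invariant, illustrative only (the helper's internals are not shown in this patch):

```c
/*
 * Illustrative only, not from the patch: once a mapping declares a
 * minimum folio order (via mapping_min_folio_order(), added earlier in
 * this series), no folio below that order may sit in its page cache,
 * so deletion is a natural place to assert it.
 */
static void assert_min_folio_order(struct address_space *mapping,
				   struct folio *folio)
{
	unsigned int min_order = mapping_min_folio_order(mapping);

	/* Trips if something split this folio below the FS block size. */
	VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
}
```

Note also the folio_test_hugetlb() guard in the batch path: as the added comment says, hugetlb pages are represented by a single xarray entry regardless of their order, so neither the order check nor xas_set_order() applies to them.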