Message ID | 20230919135536.2165715-3-da.gomez@samsung.com (mailing list archive)
---|---
State | New
Series | [v2,1/6] shmem: drop BLOCKS_PER_PAGE macro
On Tue, Sep 19, 2023 at 01:55:47PM +0000, Daniel Gomez wrote:
> +++ b/mm/shmem.c
> @@ -846,16 +846,18 @@ static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
>  /*
>   * Remove swap entry from page cache, free the swap and its page cache.
>   */
> -static int shmem_free_swap(struct address_space *mapping,
> +static long shmem_free_swap(struct address_space *mapping,
>  			   pgoff_t index, void *radswap)
>  {
>  	void *old;
> 
>  	old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
>  	if (old != radswap)
> -		return -ENOENT;
> +		return 0;
> +
>  	free_swap_and_cache(radix_to_swp_entry(radswap));
> -	return 0;
> +
> +	return folio_nr_pages((struct folio *)radswap);
>  }

Oh my goodness. I have led you astray; my apologies.

shmem_free_swap() is called when the 'folio' is NOT actually a folio.
It's an 'exceptional' / 'value' entry. We can't do this.

Do we encode the size of the swap entry in the swp_entry_t or do we
have to get that information from the XArray (which no longer knows it
after we've stored a NULL there)?
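[Editorial aside, not part of the thread: one possible answer to the question above is to ask the XArray for the entry's order before it is overwritten, e.g. via xa_get_order(). The sketch below is an untested illustration of that idea; it keeps the original xa_cmpxchg_irq() and ignores any race between the order lookup and the exchange.]

static long shmem_free_swap(struct address_space *mapping,
			    pgoff_t index, void *radswap)
{
	int order;
	void *old;

	/* Read the order while the swap entry is still in the tree. */
	order = xa_get_order(&mapping->i_pages, index);

	old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
	if (old != radswap)
		return 0;

	free_swap_and_cache(radix_to_swp_entry(radswap));
	return 1L << order;
}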
diff --git a/mm/shmem.c b/mm/shmem.c
index de0d0fa0349e..5c9e80207cbf 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -846,16 +846,18 @@ static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
 /*
  * Remove swap entry from page cache, free the swap and its page cache.
  */
-static int shmem_free_swap(struct address_space *mapping,
+static long shmem_free_swap(struct address_space *mapping,
 			   pgoff_t index, void *radswap)
 {
 	void *old;
 
 	old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
 	if (old != radswap)
-		return -ENOENT;
+		return 0;
+
 	free_swap_and_cache(radix_to_swp_entry(radswap));
-	return 0;
+
+	return folio_nr_pages((struct folio *)radswap);
 }
 
 /*
@@ -1008,7 +1010,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			if (xa_is_value(folio)) {
 				if (unfalloc)
 					continue;
-				nr_swaps_freed += !shmem_free_swap(mapping,
+				nr_swaps_freed += shmem_free_swap(mapping,
 							indices[i], folio);
 				continue;
 			}
@@ -1077,12 +1079,12 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			if (xa_is_value(folio)) {
 				if (unfalloc)
 					continue;
-				if (shmem_free_swap(mapping, indices[i], folio)) {
+				nr_swaps_freed += shmem_free_swap(mapping, indices[i], folio);
+				if (!nr_swaps_freed) {
 					/* Swap was replaced by page: retry */
 					index = indices[i];
 					break;
 				}
-				nr_swaps_freed++;
 				continue;
 			}
 
Both callers of shmem_free_swap() need the number of pages in the folio
after the call. Make shmem_free_swap() return that number directly, and
return 0 when nothing is freed, so the callers can do their accounting
without extra error handling.

Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Daniel Gomez <da.gomez@samsung.com>
---
 mm/shmem.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)
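[Illustration only, not part of the patch: the accounting pattern the commit message describes, written out as a hypothetical helper. shmem_account_freed_swap() does not exist in mm/shmem.c; it only sketches how a caller consumes the new long return value.]

/*
 * Hypothetical helper: fold the per-call result into the caller's
 * counter and report whether anything was freed, so the caller no
 * longer has to map -ENOENT onto a count by hand.
 */
static bool shmem_account_freed_swap(struct address_space *mapping,
				     pgoff_t index, void *radswap,
				     unsigned long *nr_swaps_freed)
{
	long freed = shmem_free_swap(mapping, index, radswap);

	*nr_swaps_freed += freed;	/* adding 0 is a no-op */
	return freed != 0;		/* false: swap already replaced by a page */
}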