Message ID: 20200315012920.2687-1-richard.weiyang@gmail.com
State: New, archived
Series: [v2] mm/swap_state.c: use the same way to count page in [add_to|delete_from]_swap_cache
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 8e7ce9a9bc5e..ebed37bbf7a3 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -116,7 +116,7 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
 	struct address_space *address_space = swap_address_space(entry);
 	pgoff_t idx = swp_offset(entry);
 	XA_STATE_ORDER(xas, &address_space->i_pages, idx, compound_order(page));
-	unsigned long i, nr = compound_nr(page);
+	unsigned long i, nr = hpage_nr_pages(page);
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(PageSwapCache(page), page);
The functions add_to_swap_cache() and delete_from_swap_cache() are counterparts, but currently they use different ways to count pages. This doesn't break anything, because we only have two sizes for PageAnon, but it is confusing and not a good practice.

This patch corrects it by making both use hpage_nr_pages().

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
CC: Matthew Wilcox <willy@infradead.org>

---
v2: change to hpage_nr_pages() as suggested by Matthew Wilcox
---
 mm/swap_state.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)