
[2/3] mm/migrate: Convert isolate_movable_page() to use folios

Message ID 20230121005622.57808-3-vishal.moola@gmail.com (mailing list archive)
State New
Series Convert a couple migrate functions to use folios

Commit Message

Vishal Moola Jan. 21, 2023, 12:56 a.m. UTC
Removes 6 calls to compound_head() and prepares the function to take in a
folio instead of a page argument.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/migrate.c | 39 ++++++++++++++++++++-------------------
 1 file changed, 20 insertions(+), 19 deletions(-)

Comments

Matthew Wilcox Jan. 22, 2023, 12:46 p.m. UTC | #1
On Fri, Jan 20, 2023 at 04:56:21PM -0800, Vishal Moola (Oracle) wrote:
>  int isolate_movable_page(struct page *page, isolate_mode_t mode)
>  {
> +	struct folio *folio = page_folio(page);
>  	const struct movable_operations *mops;
>  
>  	/*
> @@ -71,11 +72,11 @@ int isolate_movable_page(struct page *page, isolate_mode_t mode)
>  	 * the put_page() at the end of this block will take care of
>  	 * release this page, thus avoiding a nasty leakage.
>  	 */
> -	if (unlikely(!get_page_unless_zero(page)))
> +	if (unlikely(!folio_try_get(folio)))

This changes behaviour.  Previously when called on a tail page, the
call failed.  Now it succeeds, getting a ref on something that at
least was the folio head at some point.
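
To spell out the difference (just restating the compound-page refcount
rules; a sketch, not code from the patch):

	/* A tail page's own refcount is always zero, so this fails: */
	if (unlikely(!get_page_unless_zero(page)))	/* tail -> fails */
		goto out;

	/*
	 * page_folio() resolves to the head page, whose refcount is
	 * normally non-zero, so this succeeds even for a tail page:
	 */
	if (unlikely(!folio_try_get(page_folio(page))))	/* head ref */
		goto out;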

If you're going to do this, you need to recheck that the page is still
part of the folio after getting the ref (see gup.c for an example).
But I think we should probably maintain the behaviour of failing on
tail pages.
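
For reference, the recheck pattern in gup.c looks roughly like this (a
sketch; the labels are assumed to match isolate_movable_page()):

	struct folio *folio = page_folio(page);

	if (unlikely(!folio_try_get(folio)))
		goto out;
	/* Did the page get freed and reused under us? */
	if (unlikely(page_folio(page) != folio)) {
		folio_put(folio);
		goto out;
	}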

Maybe something like ...

	if (unlikely(!get_page_unless_zero(page)))
		goto out;
	/* Refcount is zero on tail pages, so we must have a head */
	folio = (struct folio *)page;
Matthew Wilcox Jan. 23, 2023, 3:48 p.m. UTC | #2
On Sun, Jan 22, 2023 at 12:46:34PM +0000, Matthew Wilcox wrote:
> On Fri, Jan 20, 2023 at 04:56:21PM -0800, Vishal Moola (Oracle) wrote:
> >  int isolate_movable_page(struct page *page, isolate_mode_t mode)
> >  {
> > +	struct folio *folio = page_folio(page);
> >  	const struct movable_operations *mops;
> >  
> >  	/*
> > @@ -71,11 +72,11 @@ int isolate_movable_page(struct page *page, isolate_mode_t mode)
> >  	 * the put_page() at the end of this block will take care of
> >  	 * release this page, thus avoiding a nasty leakage.
> >  	 */
> > -	if (unlikely(!get_page_unless_zero(page)))
> > +	if (unlikely(!folio_try_get(folio)))
> 
> This changes behaviour.  Previously when called on a tail page, the
> call failed.  Now it succeeds, getting a ref on something that at
> least was the folio head at some point.
> 
> If you're going to do this, you need to recheck that the page is still
> part of the folio after getting the ref (see gup.c for an example).
> But I think we should probably maintain the behaviour of failing on
> tail pages.
> 
> Maybe something like ...
> 
> 	if (unlikely(!get_page_unless_zero(page)))
> 		goto out;
> 	/* Refcount is zero on tail pages, so we must have a head */
> 	folio = (struct folio *)page;

I've been thinking about this some more as I don't like doing these
kinds of casts (except in the helper functions).  What do you think
to adding:

struct folio *folio_get_nontail_page(struct page *page)
{
	if (unlikely(!get_page_unless_zero(page)))
		return NULL;
	return (struct folio *)page;
}

and then isolate_movable_page() looks like:

	struct folio *folio;
[...]

	folio = folio_get_nontail_page(page);
	if (!folio)
		goto out;

I keep thinking about how this is all going to work when we get to
one-pointer-per-page.  Telling tail pages from head pages becomes hard.
This probably becomes an out-of-line function that looks something like ..

	struct memdesc *memdesc = READ_ONCE(page->memdesc);
	struct folio *folio;

	if (!memdesc_is_folio(memdesc))
		return NULL;
	folio = memdesc_folio(memdesc);
	if (!folio_try_get(folio))
		return NULL;
	if (READ_ONCE(page->memdesc) != memdesc ||
	    folio->pfn != page_pfn(page)) {
		folio_put(folio);
		return NULL;
	}

	return folio;

(note: We need to check that page->memdesc still points to this folio after
getting the refcount on it.  We could loop around if it fails, but failing
the entire get is OK; if the memdesc changed, whether before, after or while
this function was called, the refcount on the memdesc it was pointing to
was zero)

Patch

diff --git a/mm/migrate.c b/mm/migrate.c
index 4c1776445c74..bcde3cbbc8c9 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -60,6 +60,7 @@ 
 
 int isolate_movable_page(struct page *page, isolate_mode_t mode)
 {
+	struct folio *folio = page_folio(page);
 	const struct movable_operations *mops;
 
 	/*
@@ -71,11 +72,11 @@  int isolate_movable_page(struct page *page, isolate_mode_t mode)
 	 * the put_page() at the end of this block will take care of
 	 * release this page, thus avoiding a nasty leakage.
 	 */
-	if (unlikely(!get_page_unless_zero(page)))
+	if (unlikely(!folio_try_get(folio)))
 		goto out;
 
-	if (unlikely(PageSlab(page)))
-		goto out_putpage;
+	if (unlikely(folio_test_slab(folio)))
+		goto out_putfolio;
 	/* Pairs with smp_wmb() in slab freeing, e.g. SLUB's __free_slab() */
 	smp_rmb();
 	/*
@@ -83,12 +84,12 @@  int isolate_movable_page(struct page *page, isolate_mode_t mode)
 	 * we use non-atomic bitops on newly allocated page flags so
 	 * unconditionally grabbing the lock ruins page's owner side.
 	 */
-	if (unlikely(!__PageMovable(page)))
-		goto out_putpage;
+	if (unlikely(!__folio_test_movable(folio)))
+		goto out_putfolio;
 	/* Pairs with smp_wmb() in slab allocation, e.g. SLUB's alloc_slab_page() */
 	smp_rmb();
-	if (unlikely(PageSlab(page)))
-		goto out_putpage;
+	if (unlikely(folio_test_slab(folio)))
+		goto out_putfolio;
 
 	/*
 	 * As movable pages are not isolated from LRU lists, concurrent
@@ -101,29 +102,29 @@  int isolate_movable_page(struct page *page, isolate_mode_t mode)
 	 * lets be sure we have the page lock
 	 * before proceeding with the movable page isolation steps.
 	 */
-	if (unlikely(!trylock_page(page)))
-		goto out_putpage;
+	if (unlikely(!folio_trylock(folio)))
+		goto out_putfolio;
 
-	if (!PageMovable(page) || PageIsolated(page))
+	if (!folio_test_movable(folio) || folio_test_isolated(folio))
 		goto out_no_isolated;
 
-	mops = page_movable_ops(page);
-	VM_BUG_ON_PAGE(!mops, page);
+	mops = folio_movable_ops(folio);
+	VM_BUG_ON_FOLIO(!mops, folio);
 
-	if (!mops->isolate_page(page, mode))
+	if (!mops->isolate_page(&folio->page, mode))
 		goto out_no_isolated;
 
 	/* Driver shouldn't use PG_isolated bit of page->flags */
-	WARN_ON_ONCE(PageIsolated(page));
-	SetPageIsolated(page);
-	unlock_page(page);
+	WARN_ON_ONCE(folio_test_isolated(folio));
+	folio_set_isolated(folio);
+	folio_unlock(folio);
 
 	return 0;
 
 out_no_isolated:
-	unlock_page(page);
-out_putpage:
-	put_page(page);
+	folio_unlock(folio);
+out_putfolio:
+	folio_put(folio);
 out:
 	return -EBUSY;
 }