
[v5,2/5] mm,compaction: Let isolate_migratepages_{range,block} return error codes

Message ID 20210317111251.17808-3-osalvador@suse.de (mailing list archive)
State: New, archived
Series: Make alloc_contig_range handle Hugetlb pages

Commit Message

Oscar Salvador March 17, 2021, 11:12 a.m. UTC
Currently, isolate_migratepages_{range,block} and their callers use
a pfn == 0 vs pfn != 0 scheme to let the caller know whether there was
any error during isolation.
This does not work as soon as we need to start reporting different error
codes and make sure they are passed down the chain, so they can be properly
interpreted by functions such as alloc_contig_range.

Let us rework isolate_migratepages_{range,block} so we can report error
codes.
Since isolate_migratepages_block will stop returning the next pfn to be
scanned, we reuse the cc->migrate_pfn field to keep track of that.

Signed-off-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/compaction.c | 48 ++++++++++++++++++++++++------------------------
 mm/internal.h   |  2 +-
 mm/page_alloc.c |  7 +++----
 3 files changed, 28 insertions(+), 29 deletions(-)

Comments

Michal Hocko March 17, 2021, 2:12 p.m. UTC | #1
On Wed 17-03-21 12:12:48, Oscar Salvador wrote:
> Currently, isolate_migratepages_{range,block} and their callers use
> a pfn == 0 vs pfn != 0 scheme to let the caller know whether there was
> any error during isolation.
> This does not work as soon as we need to start reporting different error
> codes and make sure they are passed down the chain, so they can be properly
> interpreted by functions such as alloc_contig_range.
> 
> Let us rework isolate_migratepages_{range,block} so we can report error
> codes.

Yes this is an improvement.

> Since isolate_migratepages_block will stop returning the next pfn to be
> scanned, we reuse the cc->migrate_pfn field to keep track of that.

This looks hackish and I cannot really tell whether users of cc->migrate_pfn
work as intended.
> @@ -810,6 +811,8 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  	unsigned long next_skip_pfn = 0;
>  	bool skip_updated = false;
>  
> +	cc->migrate_pfn = low_pfn;
> +
>  	/*
>  	 * Ensure that there are not too many pages isolated from the LRU
>  	 * list by either parallel reclaimers or compaction. If there are,
> @@ -818,16 +821,16 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  	while (unlikely(too_many_isolated(pgdat))) {
>  		/* stop isolation if there are still pages not migrated */
>  		if (cc->nr_migratepages)
> -			return 0;
> +			return -EINTR;
>  
>  		/* async migration should just abort */
>  		if (cc->mode == MIGRATE_ASYNC)
> -			return 0;
> +			return -EINTR;

EINTR for anything other than signal based bail out is really confusing.
Oscar Salvador March 17, 2021, 2:38 p.m. UTC | #2
On Wed, Mar 17, 2021 at 03:12:29PM +0100, Michal Hocko wrote:
> > Since isolate_migratepages_block will stop returning the next pfn to be
> > scanned, we reuse the cc->migrate_pfn field to keep track of that.
> 
> This looks hackish and I cannot really tell whether users of cc->migrate_pfn
> work as intended.

When discussing this with Vlastimil, I came up with the idea of adding a new
field to the compact_control struct, e.g. next_pfn_scan, to keep track of the
next pfn to be scanned.

But Vlastimil made me realize that cc->migrate_pfn already points to that,
so we do not need any extra field.

> > @@ -810,6 +811,8 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> >  	unsigned long next_skip_pfn = 0;
> >  	bool skip_updated = false;
> >  
> > +	cc->migrate_pfn = low_pfn;
> > +
> >  	/*
> >  	 * Ensure that there are not too many pages isolated from the LRU
> >  	 * list by either parallel reclaimers or compaction. If there are,
> > @@ -818,16 +821,16 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> >  	while (unlikely(too_many_isolated(pgdat))) {
> >  		/* stop isolation if there are still pages not migrated */
> >  		if (cc->nr_migratepages)
> > -			return 0;
> > +			return -EINTR;
> >  
> >  		/* async migration should just abort */
> >  		if (cc->mode == MIGRATE_ASYNC)
> > -			return 0;
> > +			return -EINTR;
> 
> EINTR for anything other than signal based bail out is really confusing.

When coding that, I thought about using -1 for the first two checks and keeping
-EINTR for the signal check, but isolate_migratepages_block only has two users:

- isolate_migratepages: Does not care about the return code other than pfn != 0,
  and it does not pass the error down the chain.
- isolate_migratepages_range: The error is passed down the chain, and !pfn is being
  treated as -EINTR:

static int __alloc_contig_migrate_range(struct compact_control *cc,
					unsigned long start, unsigned long end)
 {
  ...
  ...
  pfn = isolate_migratepages_range(cc, pfn, end);
  if (!pfn) {
          ret = -EINTR;
          break;
  }
  ...
 }

That is why I decided to stick with -EINTR.
Michal Hocko March 17, 2021, 2:59 p.m. UTC | #3
On Wed 17-03-21 15:38:35, Oscar Salvador wrote:
> On Wed, Mar 17, 2021 at 03:12:29PM +0100, Michal Hocko wrote:
> > > Since isolate_migratepages_block will stop returning the next pfn to be
> > > scanned, we reuse the cc->migrate_pfn field to keep track of that.
> > 
> > This looks hackish and I cannot really tell whether users of cc->migrate_pfn
> > work as intended.
> 
> When discussing this with Vlastimil, I came up with the idea of adding a new
> field to the compact_control struct, e.g. next_pfn_scan, to keep track of the
> next pfn to be scanned.
> 
> But Vlastimil made me realize that cc->migrate_pfn already points to that,
> so we do not need any extra field.

This deserves a big fat comment.

> > > @@ -810,6 +811,8 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> > >  	unsigned long next_skip_pfn = 0;
> > >  	bool skip_updated = false;
> > >  
> > > +	cc->migrate_pfn = low_pfn;
> > > +
> > >  	/*
> > >  	 * Ensure that there are not too many pages isolated from the LRU
> > >  	 * list by either parallel reclaimers or compaction. If there are,
> > > @@ -818,16 +821,16 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> > >  	while (unlikely(too_many_isolated(pgdat))) {
> > >  		/* stop isolation if there are still pages not migrated */
> > >  		if (cc->nr_migratepages)
> > > -			return 0;
> > > +			return -EINTR;
> > >  
> > >  		/* async migration should just abort */
> > >  		if (cc->mode == MIGRATE_ASYNC)
> > > -			return 0;
> > > +			return -EINTR;
> > 
> > EINTR for anything other than signal based bail out is really confusing.
> 
> When coding that, I thought about using -1 for the first two checks and keeping
> -EINTR for the signal check, but isolate_migratepages_block only has two users:

No, do not mix error reporting with different semantics. Either make it an
errno or return -1 for all failures if you do not care which error it is. You
do care, hence this patch, so make it an errno, and the above two should
simply be EAGAIN as this is a congestion situation.

> - isolate_migratepages: Does not care about the return code other than pfn != 0,
>   and it does not pass the error down the chain.
> - isolate_migratepages_range: The error is passed down the chain, and !pfn is being
>   treated as -EINTR:
> 
> static int __alloc_contig_migrate_range(struct compact_control *cc,
> 					unsigned long start, unsigned long end)
>  {
>   ...
>   ...
>   pfn = isolate_migratepages_range(cc, pfn, end);
>   if (!pfn) {
>           ret = -EINTR;
>           break;
>   }
>   ...
>  }
> 
> That is why I decided to stick with -EINTR.

I suspect this is only because there was not really a better way to signal the
failure, so it went with EINTR, which makes alloc_contig_range bail out. The
high level handling there is quite dubious, as EAGAIN is already possible from
the page migration path and that shouldn't be a fatal failure.
Vlastimil Babka March 18, 2021, 9:50 a.m. UTC | #4
On 3/17/21 3:59 PM, Michal Hocko wrote:
> On Wed 17-03-21 15:38:35, Oscar Salvador wrote:
>> On Wed, Mar 17, 2021 at 03:12:29PM +0100, Michal Hocko wrote:
>> > > Since isolate_migratepages_block will stop returning the next pfn to be
>> > > scanned, we reuse the cc->migrate_pfn field to keep track of that.
>> > 
>> > This looks hackish and I cannot really tell whether users of cc->migrate_pfn
>> > work as intended.

We did check those in detail. Of course it's possible to overlook something...

The alloc_contig_range user never cared about cc->migrate_pfn. compaction
(isolate_migratepages() -> isolate_migratepages_block()) did, and
isolate_migratepages_block() returned the pfn only to be assigned to
cc->migrate_pfn in isolate_migratepages(). I think it's now better that
isolate_migratepages_block() sets it.

>> When discussing this with Vlastimil, I came up with the idea of adding a new
>> field to the compact_control struct, e.g. next_pfn_scan, to keep track of the
>> next pfn to be scanned.
>> 
>> But Vlastimil made me realize that cc->migrate_pfn already points to that,
>> so we do not need any extra field.

Yes, the first patch had at some point:

	/* Record where migration scanner will be restarted. */
	cc->migrate_pfn = cc->the_new_field;

Which was a clear sign that the new field is unnecessary.

> This deserves a big fat comment.

Comment where, saying what? :)
Michal Hocko March 18, 2021, 10:22 a.m. UTC | #5
On Thu 18-03-21 10:50:38, Vlastimil Babka wrote:
> On 3/17/21 3:59 PM, Michal Hocko wrote:
> > On Wed 17-03-21 15:38:35, Oscar Salvador wrote:
> >> On Wed, Mar 17, 2021 at 03:12:29PM +0100, Michal Hocko wrote:
> >> > > Since isolate_migratepages_block will stop returning the next pfn to be
> >> > > scanned, we reuse the cc->migrate_pfn field to keep track of that.
> >> > 
> >> > This looks hackish and I cannot really tell whether users of cc->migrate_pfn
> >> > work as intended.
> 
> We did check those in detail. Of course it's possible to overlook something...
> 
> The alloc_contig_range user never cared about cc->migrate_pfn. compaction
> (isolate_migratepages() -> isolate_migratepages_block()) did, and
> isolate_migratepages_block() returned the pfn only to be assigned to
> cc->migrate_pfn in isolate_migratepages(). I think it's now better that
> isolate_migratepages_block() sets it.
> 
> >> When discussing this with Vlastimil, I came up with the idea of adding a new
> >> field to the compact_control struct, e.g. next_pfn_scan, to keep track of the
> >> next pfn to be scanned.
> >> 
> >> But Vlastimil made me realize that cc->migrate_pfn already points to that,
> >> so we do not need any extra field.
> 
> Yes, the first patch had at some point:
> 
> 	/* Record where migration scanner will be restarted. */
> 	cc->migrate_pfn = cc->the_new_field;
> 
> Which was a clear sign that the new field is unnecessary.
> 
> > This deserves a big fat comment.
> 
> Comment where, saying what? :)

E.g. something like the following
diff --git a/mm/internal.h b/mm/internal.h
index 1432feec62df..6c5a9066adf0 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -225,7 +225,13 @@ struct compact_control {
 	unsigned int nr_freepages;	/* Number of isolated free pages */
 	unsigned int nr_migratepages;	/* Number of pages to migrate */
 	unsigned long free_pfn;		/* isolate_freepages search base */
-	unsigned long migrate_pfn;	/* isolate_migratepages search base */
+	unsigned long migrate_pfn;	/* Acts as an in/out parameter to page
+					 * isolation.
+					 * isolate_migratepages uses it as a search base.
+					 * isolate_migratepages_block will update the
+					 * value to the next pfn after the last isolated
+					 * one.
+					 */
 	unsigned long fast_start_pfn;	/* a pfn to start linear scan from */
 	struct zone *zone;
 	unsigned long total_migrate_scanned;

Btw isolate_migratepages_block still has this comment which needs
updating
"The cc->migrate_pfn field is neither read nor updated."
Vlastimil Babka March 18, 2021, 11:10 a.m. UTC | #6
On 3/18/21 11:22 AM, Michal Hocko wrote:
> On Thu 18-03-21 10:50:38, Vlastimil Babka wrote:
>> On 3/17/21 3:59 PM, Michal Hocko wrote:
>> > On Wed 17-03-21 15:38:35, Oscar Salvador wrote:
>> >> On Wed, Mar 17, 2021 at 03:12:29PM +0100, Michal Hocko wrote:
>> >> > > Since isolate_migratepages_block will stop returning the next pfn to be
>> >> > > scanned, we reuse the cc->migrate_pfn field to keep track of that.
>> >> > 
>> >> > This looks hackish and I cannot really tell whether users of cc->migrate_pfn
>> >> > work as intended.
>> 
>> We did check those in detail. Of course it's possible to overlook something...
>> 
>> The alloc_contig_range user never cared about cc->migrate_pfn. compaction
>> (isolate_migratepages() -> isolate_migratepages_block()) did, and
>> isolate_migratepages_block() returned the pfn only to be assigned to
>> cc->migrate_pfn in isolate_migratepages(). I think it's now better that
>> isolate_migratepages_block() sets it.
>> 
>> >> When discussing this with Vlastimil, I came up with the idea of adding a new
>> >> field to the compact_control struct, e.g. next_pfn_scan, to keep track of the
>> >> next pfn to be scanned.
>> >> 
>> >> But Vlastimil made me realize that cc->migrate_pfn already points to that,
>> >> so we do not need any extra field.
>> 
>> Yes, the first patch had at some point:
>> 
>> 	/* Record where migration scanner will be restarted. */
>> 	cc->migrate_pfn = cc->the_new_field;
>> 
>> Which was a clear sign that the new field is unnecessary.
>> 
>> > This deserves a big fat comment.
>> 
>> Comment where, saying what? :)
> 
> E.g. something like the following
> diff --git a/mm/internal.h b/mm/internal.h
> index 1432feec62df..6c5a9066adf0 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -225,7 +225,13 @@ struct compact_control {
>  	unsigned int nr_freepages;	/* Number of isolated free pages */
>  	unsigned int nr_migratepages;	/* Number of pages to migrate */
>  	unsigned long free_pfn;		/* isolate_freepages search base */
> -	unsigned long migrate_pfn;	/* isolate_migratepages search base */
> +	unsigned long migrate_pfn;	/* Acts as an in/out parameter to page
> +					 * isolation.
> +					 * isolate_migratepages uses it as a search base.
> +					 * isolate_migratepages_block will update the
> +					 * value to the next pfn after the last isolated
> +					 * one.
> +					 */

Fair enough. I would even stop pretending we might cram something useful in the
rest of the line, and move all the comments to blocks before the variables.
There might be more of them that would deserve more thorough description.

>  	unsigned long fast_start_pfn;	/* a pfn to start linear scan from */
>  	struct zone *zone;
>  	unsigned long total_migrate_scanned;
> 
> Btw isolate_migratepages_block still has this comment which needs
> updating
> "The cc->migrate_pfn field is neither read nor updated."

Good catch.
Michal Hocko March 18, 2021, 11:36 a.m. UTC | #7
On Thu 18-03-21 12:10:14, Vlastimil Babka wrote:
> On 3/18/21 11:22 AM, Michal Hocko wrote:
[...]
> > E.g. something like the following
> > diff --git a/mm/internal.h b/mm/internal.h
> > index 1432feec62df..6c5a9066adf0 100644
> > --- a/mm/internal.h
> > +++ b/mm/internal.h
> > @@ -225,7 +225,13 @@ struct compact_control {
> >  	unsigned int nr_freepages;	/* Number of isolated free pages */
> >  	unsigned int nr_migratepages;	/* Number of pages to migrate */
> >  	unsigned long free_pfn;		/* isolate_freepages search base */
> > -	unsigned long migrate_pfn;	/* isolate_migratepages search base */
> > +	unsigned long migrate_pfn;	/* Acts as an in/out parameter to page
> > +					 * isolation.
> > +					 * isolate_migratepages uses it as a search base.
> > +					 * isolate_migratepages_block will update the
> > +					 * value to the next pfn after the last isolated
> > +					 * one.
> > +					 */
> 
> Fair enough. I would even stop pretending we might cram something useful in the
> rest of the line, and move all the comments to blocks before the variables.
> There might be more of them that would deserve more thorough description.

Yeah, makes sense. I am not a fan of the above form of documentation.
Btw. maybe renaming the field would be even better, both to make the
intention clear and to prompt a review of all existing users. I would go with
pfn_iter or something that wouldn't make it sound migration specific.
Oscar Salvador March 19, 2021, 9:57 a.m. UTC | #8
On Thu, Mar 18, 2021 at 12:36:52PM +0100, Michal Hocko wrote:
> Yeah, makes sense. I am not a fan of the above form of documentation.
> Btw. maybe renaming the field would be even better, both to make the
> intention clear and to prompt a review of all existing users. I would go with
> pfn_iter or something that wouldn't make it sound migration specific.

Just to be sure we are on the same page, you meant something like the following
(wrt. comments):

 /*
  * compact_control is used to track pages being migrated and the free pages
  * they are being migrated to during memory compaction. The free_pfn starts
  * at the end of a zone and migrate_pfn begins at the start. Movable pages
  * are moved to the end of a zone during a compaction run and the run
  * completes when free_pfn <= migrate_pfn
  *
  * freepages:           List of free pages to migrate to
  * migratepages:        List of pages that need to be migrated
  * nr_freepages:        Number of isolated free pages
  ...
  */
  struct compact_control {
          struct list_head freepages;
          ...

With the preface that I am not really familiar with compaction code:

About renaming the variable to something else, I wouldn't do it.
I see migrate_pfn being used in contexts where migration gets mentioned,
e.g.:

 /*
  * Briefly search the free lists for a migration source that already has
  * some free pages to reduce the number of pages that need migration
  * before a pageblock is free.
  */
 fast_find_migrateblock(struct compact_control *cc)
 {
  ...
  unsigned long pfn = cc->migrate_pfn;
 }

isolate_migratepages()
 /* Record where migration scanner will be restarted. */


So, I would either stick with it, or add a new 'iter_pfn'/'next_pfn_scan'
field if we feel the need to.
Vlastimil Babka March 19, 2021, 10:14 a.m. UTC | #9
On 3/19/21 10:57 AM, Oscar Salvador wrote:
> On Thu, Mar 18, 2021 at 12:36:52PM +0100, Michal Hocko wrote:
>> Yeah, makes sense. I am not a fan of the above form of documentation.
>> Btw. maybe renaming the field would be even better, both to make the
>> intention clear and to prompt a review of all existing users. I would go with
>> pfn_iter or something that wouldn't make it sound migration specific.
> 
> Just to be sure we are on the same page, you meant something like the following
> (wrt. comments):
> 
>  /*
>   * compact_control is used to track pages being migrated and the free pages
>   * they are being migrated to during memory compaction. The free_pfn starts
>   * at the end of a zone and migrate_pfn begins at the start. Movable pages
>   * are moved to the end of a zone during a compaction run and the run
>   * completes when free_pfn <= migrate_pfn
>   *
>   * freepages:           List of free pages to migrate to
>   * migratepages:        List of pages that need to be migrated
>   * nr_freepages:        Number of isolated free pages
>   ...
>   */
>   struct compact_control {
>           struct list_head freepages;
>           ...

No I meant this:

--- a/mm/internal.h
+++ b/mm/internal.h
@@ -225,7 +225,13 @@ struct compact_control {
        unsigned int nr_freepages;      /* Number of isolated free pages */
        unsigned int nr_migratepages;   /* Number of pages to migrate */
        unsigned long free_pfn;         /* isolate_freepages search base */
-       unsigned long migrate_pfn;      /* isolate_migratepages search base */
+       /*
+        * Acts as an in/out parameter to page isolation for migration.
+        * isolate_migratepages uses it as a search base.
+        * isolate_migratepages_block will update the value to the next pfn
+        * after the last isolated one.
+        */
+       unsigned long migrate_pfn;
        unsigned long fast_start_pfn;   /* a pfn to start linear scan from */
        struct zone *zone;
        unsigned long total_migrate_scanned;


> With the preface that I am not really familiar with compaction code:
> 
> About renaming the variable to something else, I wouldn't do it.
> I see migrate_pfn being used in contexts where migration gets mentioned,
> e.g.:

I also don't like the renaming much. "Migration" is important as this is about
pages to be migrated, and there's the "free_pfn" field tracking the scan for
free pages as the migration target. So the name can't be as generic as "pfn_iter".

>  /*
>   * Briefly search the free lists for a migration source that already has
>   * some free pages to reduce the number of pages that need migration
>   * before a pageblock is free.
>   */
>  fast_find_migrateblock(struct compact_control *cc)
>  {
>   ...
>   unsigned long pfn = cc->migrate_pfn;
>  }
> 
> isolate_migratepages()
>  /* Record where migration scanner will be restarted. */
> 
> 
> So, I would either stick with it, or add a new 'iter_pfn'/'next_pfn_scan'
> field if we feel the need to.
> 
>
Oscar Salvador March 19, 2021, 10:26 a.m. UTC | #10
On Fri, Mar 19, 2021 at 11:14:25AM +0100, Vlastimil Babka wrote:
> No I meant this:
> 
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -225,7 +225,13 @@ struct compact_control {
>         unsigned int nr_freepages;      /* Number of isolated free pages */
>         unsigned int nr_migratepages;   /* Number of pages to migrate */
>         unsigned long free_pfn;         /* isolate_freepages search base */
> -       unsigned long migrate_pfn;      /* isolate_migratepages search base */
> +       /*
> +        * Acts as an in/out parameter to page isolation for migration.
> +        * isolate_migratepages uses it as a search base.
> +        * isolate_migratepages_block will update the value to the next pfn
> +        * after the last isolated one.
> +        */
> +       unsigned long migrate_pfn;
>         unsigned long fast_start_pfn;   /* a pfn to start linear scan from */
>         struct zone *zone;
>         unsigned long total_migrate_scanned;

Meh, silly me.
Ok, I will do it that way.

I am also for expanding some of the comments as I see that some explanations are
rather laconic, but I do not think such work fits in this patchset.

Since I happen to be checking compaction code due to other reasons, I shall
come back to this matter once I am done with this patchset.

Patch

diff --git a/mm/compaction.c b/mm/compaction.c
index e04f4476e68e..5769753a8f60 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -787,15 +787,16 @@  static bool too_many_isolated(pg_data_t *pgdat)
  *
  * Isolate all pages that can be migrated from the range specified by
  * [low_pfn, end_pfn). The range is expected to be within same pageblock.
- * Returns zero if there is a fatal signal pending, otherwise PFN of the
- * first page that was not scanned (which may be both less, equal to or more
- * than end_pfn).
+ * Returns -EINTR in case we need to abort: too many isolated pages combined
+ * with e.g. a pending signal, async mode, or pages still left to migrate;
+ * otherwise 0. cc->migrate_pfn will contain the next pfn to scan (which may
+ * be less than, equal to or more than end_pfn).
  *
  * The pages are isolated on cc->migratepages list (not required to be empty),
  * and cc->nr_migratepages is updated accordingly. The cc->migrate_pfn field
  * is neither read nor updated.
  */
-static unsigned long
+static int
 isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			unsigned long end_pfn, isolate_mode_t isolate_mode)
 {
@@ -810,6 +811,8 @@  isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	unsigned long next_skip_pfn = 0;
 	bool skip_updated = false;
 
+	cc->migrate_pfn = low_pfn;
+
 	/*
 	 * Ensure that there are not too many pages isolated from the LRU
 	 * list by either parallel reclaimers or compaction. If there are,
@@ -818,16 +821,16 @@  isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	while (unlikely(too_many_isolated(pgdat))) {
 		/* stop isolation if there are still pages not migrated */
 		if (cc->nr_migratepages)
-			return 0;
+			return -EINTR;
 
 		/* async migration should just abort */
 		if (cc->mode == MIGRATE_ASYNC)
-			return 0;
+			return -EINTR;
 
 		congestion_wait(BLK_RW_ASYNC, HZ/10);
 
 		if (fatal_signal_pending(current))
-			return 0;
+			return -EINTR;
 	}
 
 	cond_resched();
@@ -1130,7 +1133,9 @@  isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	if (nr_isolated)
 		count_compact_events(COMPACTISOLATED, nr_isolated);
 
-	return low_pfn;
+	cc->migrate_pfn = low_pfn;
+
+	return 0;
 }
 
 /**
@@ -1139,15 +1144,15 @@  isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
  * @start_pfn: The first PFN to start isolating.
  * @end_pfn:   The one-past-last PFN.
  *
- * Returns zero if isolation fails fatally due to e.g. pending signal.
- * Otherwise, function returns one-past-the-last PFN of isolated page
- * (which may be greater than end_pfn if end fell in a middle of a THP page).
+ * Returns -EINTR in case isolation fails fatally due to e.g. a pending signal,
+ * or 0 on success.
  */
-unsigned long
+int
 isolate_migratepages_range(struct compact_control *cc, unsigned long start_pfn,
 							unsigned long end_pfn)
 {
 	unsigned long pfn, block_start_pfn, block_end_pfn;
+	int ret = 0;
 
 	/* Scan block by block. First and last block may be incomplete */
 	pfn = start_pfn;
@@ -1166,17 +1171,17 @@  isolate_migratepages_range(struct compact_control *cc, unsigned long start_pfn,
 					block_end_pfn, cc->zone))
 			continue;
 
-		pfn = isolate_migratepages_block(cc, pfn, block_end_pfn,
-							ISOLATE_UNEVICTABLE);
+		ret = isolate_migratepages_block(cc, pfn, block_end_pfn,
+						 ISOLATE_UNEVICTABLE);
 
-		if (!pfn)
+		if (ret)
 			break;
 
 		if (cc->nr_migratepages >= COMPACT_CLUSTER_MAX)
 			break;
 	}
 
-	return pfn;
+	return ret;
 }
 
 #endif /* CONFIG_COMPACTION || CONFIG_CMA */
@@ -1847,7 +1852,7 @@  static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
 	 */
 	for (; block_end_pfn <= cc->free_pfn;
 			fast_find_block = false,
-			low_pfn = block_end_pfn,
+			cc->migrate_pfn = low_pfn = block_end_pfn,
 			block_start_pfn = block_end_pfn,
 			block_end_pfn += pageblock_nr_pages) {
 
@@ -1889,10 +1894,8 @@  static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
 		}
 
 		/* Perform the isolation */
-		low_pfn = isolate_migratepages_block(cc, low_pfn,
-						block_end_pfn, isolate_mode);
-
-		if (!low_pfn)
+		if (isolate_migratepages_block(cc, low_pfn, block_end_pfn,
+						isolate_mode))
 			return ISOLATE_ABORT;
 
 		/*
@@ -1903,9 +1906,6 @@  static isolate_migrate_t isolate_migratepages(struct compact_control *cc)
 		break;
 	}
 
-	/* Record where migration scanner will be restarted. */
-	cc->migrate_pfn = low_pfn;
-
 	return cc->nr_migratepages ? ISOLATE_SUCCESS : ISOLATE_NONE;
 }
 
diff --git a/mm/internal.h b/mm/internal.h
index 1432feec62df..1f2ccba8e289 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -261,7 +261,7 @@  struct capture_control {
 unsigned long
 isolate_freepages_range(struct compact_control *cc,
 			unsigned long start_pfn, unsigned long end_pfn);
-unsigned long
+int
 isolate_migratepages_range(struct compact_control *cc,
 			   unsigned long low_pfn, unsigned long end_pfn);
 int find_suitable_fallback(struct free_area *area, unsigned int order,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a4f67063b85f..4cb455355f6d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8474,11 +8474,10 @@  static int __alloc_contig_migrate_range(struct compact_control *cc,
 
 		if (list_empty(&cc->migratepages)) {
 			cc->nr_migratepages = 0;
-			pfn = isolate_migratepages_range(cc, pfn, end);
-			if (!pfn) {
-				ret = -EINTR;
+			ret = isolate_migratepages_range(cc, pfn, end);
+			if (ret)
 				break;
-			}
+			pfn = cc->migrate_pfn;
 			tries = 0;
 		} else if (++tries == 5) {
 			ret = -EBUSY;