Message ID | 20240828202240.2809740-5-ziy@nvidia.com (mailing list archive)
---|---
State | New
Series | Make MIGRATE_ISOLATE a standalone bit
On 28.08.24 22:22, Zi Yan wrote:
> migratetype is no longer overwritten during pageblock isolation, so
> start_isolate_page_range(), has_unmovable_pages(), and
> set_migratetype_isolate() no longer need to know which migratetype to
> restore on isolation failure. has_unmovable_pages() does need to know
> whether the isolation is for a CMA allocation, so add CMA_ALLOCATION to
> the isolation flags to provide that information.
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
>  include/linux/page-isolation.h |  3 ++-
>  mm/memory_hotplug.c            |  1 -
>  mm/page_alloc.c                |  4 +++-
>  mm/page_isolation.c            | 27 +++++++++++----------------
>  4 files changed, 16 insertions(+), 19 deletions(-)
>
> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
> index c2a1bd621561..e94117101b6c 100644
> --- a/include/linux/page-isolation.h
> +++ b/include/linux/page-isolation.h
> @@ -32,13 +32,14 @@ static inline bool is_migrate_isolate(int migratetype)
>
>  #define MEMORY_OFFLINE	0x1
>  #define REPORT_FAILURE	0x2
> +#define CMA_ALLOCATION	0x4
>
>  void set_pageblock_migratetype(struct page *page, int migratetype);
>
>  bool move_freepages_block_isolate(struct zone *zone, struct page *page);
>
>  int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
> -			     int migratetype, int flags, gfp_t gfp_flags);
> +			     int flags, gfp_t gfp_flags);
>
>  void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn);
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 4265272faf4c..fe0b71e0f307 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1993,7 +1993,6 @@ int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
>
>  	/* set above range as isolated */
>  	ret = start_isolate_page_range(start_pfn, end_pfn,
> -				       MIGRATE_MOVABLE,
>  				       MEMORY_OFFLINE | REPORT_FAILURE,
>  				       GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL);
>  	if (ret) {
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 4d06932ba69a..c60bb95d7e65 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6607,7 +6607,9 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
>  	 * put back to page allocator so that buddy can use them.
>  	 */
>
> -	ret = start_isolate_page_range(start, end, migratetype, 0, gfp_mask);
> +	ret = start_isolate_page_range(start, end,
> +				       migratetype == MIGRATE_CMA ? CMA_ALLOCATION : 0,

Can we have flags for alloc_contig_range() instead of passing in a (weird) migratetype?

Then we should make sure that we warn if we try a CMA allocation on any pageblock that is not of type CMA.

I'm trying to remember what happens if we try offlining a memory region that is of type MIGRATE_CMA right now ... I remember that we fail it. We should make sure that is still the case, otherwise we could seriously break something.
On 2 Sep 2024, at 5:06, David Hildenbrand wrote:

> On 28.08.24 22:22, Zi Yan wrote:
>
> [...]
>
>> -	ret = start_isolate_page_range(start, end, migratetype, 0, gfp_mask);
>> +	ret = start_isolate_page_range(start, end,
>> +				       migratetype == MIGRATE_CMA ? CMA_ALLOCATION : 0,
>
> Can we have flags for alloc_contig_range() instead of passing in a (weird) migratetype?
>
> Then we should make sure that we warn if we try a CMA allocation on any pageblock that is not of type CMA.

Sure. I will expose the existing isolation flags (MEMORY_OFFLINE, REPORT_FAILURE,
and CMA_ALLOCATION) as an alloc_contig_range() parameter to replace the
migratetype one.

> I'm trying to remember what happens if we try offlining a memory region that is of type MIGRATE_CMA right now ... I remember that we fail it. We should make sure that is still the case, otherwise we could seriously break something.

Yes, that fails. That is why I added CMA_ALLOCATION, which is used in
has_unmovable_pages() for this situation.

--
Best Regards,
Yan, Zi
On 02.09.24 17:34, Zi Yan wrote:
> On 2 Sep 2024, at 5:06, David Hildenbrand wrote:
>
> [...]
>
>>> -	ret = start_isolate_page_range(start, end, migratetype, 0, gfp_mask);
>>> +	ret = start_isolate_page_range(start, end,
>>> +				       migratetype == MIGRATE_CMA ? CMA_ALLOCATION : 0,
>>
>> Can we have flags for alloc_contig_range() instead of passing in a (weird) migratetype?
>>
>> Then we should make sure that we warn if we try a CMA allocation on any pageblock that is not of type CMA.
>
> Sure. I will expose the existing isolation flags (MEMORY_OFFLINE, REPORT_FAILURE,
> and CMA_ALLOCATION) as an alloc_contig_range() parameter to replace the
> migratetype one.

Maybe we want some proper, distinct alloc_contig_range() flags, "acr_flags_t". That might be cleanest to express anything that doesn't fall into the gfp_t flag category.

Exposing MEMORY_OFFLINE feels wrong, for example.

>> I'm trying to remember what happens if we try offlining a memory region that is of type MIGRATE_CMA right now ... I remember that we fail it. We should make sure that is still the case, otherwise we could seriously break something.
>
> Yes, that fails. That is why I added CMA_ALLOCATION, which is used in
> has_unmovable_pages() for this situation.

Ah, okay. I stumbled over that but wasn't sure it gets the job done. Thanks!
On 2 Sep 2024, at 12:42, David Hildenbrand wrote:

> On 02.09.24 17:34, Zi Yan wrote:
>
> [...]
>
>>> Can we have flags for alloc_contig_range() instead of passing in a (weird) migratetype?
>>>
>>> Then we should make sure that we warn if we try a CMA allocation on any pageblock that is not of type CMA.
>>
>> Sure. I will expose the existing isolation flags (MEMORY_OFFLINE, REPORT_FAILURE,
>> and CMA_ALLOCATION) as an alloc_contig_range() parameter to replace the
>> migratetype one.
>
> Maybe we want some proper, distinct alloc_contig_range() flags, "acr_flags_t". That might be cleanest to express anything that doesn't fall into the gfp_t flag category.
>
> Exposing MEMORY_OFFLINE feels wrong, for example.

OK, it seems that I mixed up the start_isolate_page_range() flags with the
alloc_contig_range() flags. Let me clarify them below.

For start_isolate_page_range(), memory offline calls it directly and
needs MEMORY_OFFLINE and REPORT_FAILURE; CMA allocation uses it via
alloc_contig_range() and needs a flag (like CMA_ALLOCATION) for its
own checks.

BTW, it seems to me that MEMORY_OFFLINE and REPORT_FAILURE can be merged,
since they are always used together. Let me know if you disagree.

For alloc_contig_range(), the migratetype parameter is what you are talking
about above. There are two callers: cma_alloc() and alloc_contig_pages().
The acr_flags_t is basically a caller id. Something like?

enum acr_flags_t {
	ACR_CMA_ALLOC,
	ACR_CONTIG_PAGES,
};

And ACR_CMA_ALLOC needs to be translated to CMA_ALLOCATION when
start_isolate_page_range() is called.

BTW, after removing the migratetype parameter from alloc_contig_range(),
the tracepoint in __alloc_contig_migrate_range() needs to be changed to
use acr_flags_t, since I do not think we want to convert acr_flags_t
back to a migratetype.

Best Regards,
Yan, Zi
On 04.09.24 04:02, Zi Yan wrote:
> On 2 Sep 2024, at 12:42, David Hildenbrand wrote:
>
> [...]
>
>> Maybe we want some proper, distinct alloc_contig_range() flags, "acr_flags_t". That might be cleanest to express anything that doesn't fall into the gfp_t flag category.
>>
>> Exposing MEMORY_OFFLINE feels wrong, for example.
>
> OK, it seems that I mixed up the start_isolate_page_range() flags with the
> alloc_contig_range() flags. Let me clarify them below.
>
> For start_isolate_page_range(), memory offline calls it directly and
> needs MEMORY_OFFLINE and REPORT_FAILURE; CMA allocation uses it via
> alloc_contig_range() and needs a flag (like CMA_ALLOCATION) for its
> own checks.
>
> BTW, it seems to me that MEMORY_OFFLINE and REPORT_FAILURE can be merged,
> since they are always used together. Let me know if you disagree.

I think there was a discussion about possibly using REPORT_FAILURE in other cases, but I agree that we might just merge them at this point.

> For alloc_contig_range(), the migratetype parameter is what you are talking
> about above. There are two callers: cma_alloc() and alloc_contig_pages().
> The acr_flags_t is basically a caller id. Something like?
>
> enum acr_flags_t {
> 	ACR_CMA_ALLOC,
> 	ACR_CONTIG_PAGES,
> };

I'd do something like:

typedef unsigned int __bitwise acr_flags_t;

#define ACR_CMA		((__force acr_flags_t)BIT(0))

No need for "ACR_CONTIG_PAGES"; it's implicit if the CMA flag is not set.

> And ACR_CMA_ALLOC needs to be translated to CMA_ALLOCATION when
> start_isolate_page_range() is called.

Yes.

> BTW, after removing the migratetype parameter from alloc_contig_range(),
> the tracepoint in __alloc_contig_migrate_range() needs to be changed to
> use acr_flags_t, since I do not think we want to convert acr_flags_t
> back to a migratetype.

Sure, feel free to modify these tracepoints as it suits you.
On 4 Sep 2024, at 4:50, David Hildenbrand wrote:

> On 04.09.24 04:02, Zi Yan wrote:
>
> [...]
>
>> BTW, it seems to me that MEMORY_OFFLINE and REPORT_FAILURE can be merged,
>> since they are always used together. Let me know if you disagree.
>
> I think there was a discussion about possibly using REPORT_FAILURE in other cases, but I agree that we might just merge them at this point.
>
>> The acr_flags_t is basically a caller id. Something like?
>>
>> enum acr_flags_t {
>> 	ACR_CMA_ALLOC,
>> 	ACR_CONTIG_PAGES,
>> };
>
> I'd do something like:
>
> typedef unsigned int __bitwise acr_flags_t;
>
> #define ACR_CMA		((__force acr_flags_t)BIT(0))
>
> No need for "ACR_CONTIG_PAGES"; it's implicit if the CMA flag is not set.

Got it. Will use this in the next version.

>> And ACR_CMA_ALLOC needs to be translated to CMA_ALLOCATION when
>> start_isolate_page_range() is called.
>
> Yes.
>
>> BTW, after removing the migratetype parameter from alloc_contig_range(),
>> the tracepoint in __alloc_contig_migrate_range() needs to be changed to
>> use acr_flags_t, since I do not think we want to convert acr_flags_t
>> back to a migratetype.
>
> Sure, feel free to modify these tracepoints as it suits you.

Thanks. :)

--
Best Regards,
Yan, Zi
diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index c2a1bd621561..e94117101b6c 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -32,13 +32,14 @@ static inline bool is_migrate_isolate(int migratetype)
 
 #define MEMORY_OFFLINE	0x1
 #define REPORT_FAILURE	0x2
+#define CMA_ALLOCATION	0x4
 
 void set_pageblock_migratetype(struct page *page, int migratetype);
 
 bool move_freepages_block_isolate(struct zone *zone, struct page *page);
 
 int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
-			     int migratetype, int flags, gfp_t gfp_flags);
+			     int flags, gfp_t gfp_flags);
 
 void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn);
 
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 4265272faf4c..fe0b71e0f307 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1993,7 +1993,6 @@ int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
 
 	/* set above range as isolated */
 	ret = start_isolate_page_range(start_pfn, end_pfn,
-				       MIGRATE_MOVABLE,
 				       MEMORY_OFFLINE | REPORT_FAILURE,
 				       GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL);
 	if (ret) {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4d06932ba69a..c60bb95d7e65 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6607,7 +6607,9 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
 	 * put back to page allocator so that buddy can use them.
 	 */
 
-	ret = start_isolate_page_range(start, end, migratetype, 0, gfp_mask);
+	ret = start_isolate_page_range(start, end,
+				       migratetype == MIGRATE_CMA ? CMA_ALLOCATION : 0,
+				       gfp_mask);
 	if (ret)
 		goto done;
 
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 4c65157d78ef..07c58b82db76 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -31,7 +31,7 @@
  *
  */
 static struct page *has_unmovable_pages(unsigned long start_pfn, unsigned long end_pfn,
-				int migratetype, int flags)
+				int flags)
 {
 	struct page *page = pfn_to_page(start_pfn);
 	struct zone *zone = page_zone(page);
@@ -46,7 +46,7 @@ static struct page *has_unmovable_pages(unsigned long start_pfn, unsigned long e
 		 * isolate CMA pageblocks even when they are not movable in fact
 		 * so consider them movable here.
 		 */
-		if (is_migrate_cma(migratetype))
+		if (flags & CMA_ALLOCATION)
 			return NULL;
 
 		return page;
@@ -144,7 +144,7 @@ static struct page *has_unmovable_pages(unsigned long start_pfn, unsigned long e
  * present in [start_pfn, end_pfn). The pageblock must intersect with
  * [start_pfn, end_pfn).
  */
-static int set_migratetype_isolate(struct page *page, int migratetype, int isol_flags,
+static int set_migratetype_isolate(struct page *page, int isol_flags,
			unsigned long start_pfn, unsigned long end_pfn)
 {
 	struct zone *zone = page_zone(page);
@@ -179,7 +179,7 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
				end_pfn);
 
 	unmovable = has_unmovable_pages(check_unmovable_start, check_unmovable_end,
-			migratetype, isol_flags);
+			isol_flags);
 	if (!unmovable) {
 		if (!move_freepages_block_isolate(zone, page)) {
 			spin_unlock_irqrestore(&zone->lock, flags);
@@ -290,7 +290,6 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
  * @isolate_before:	isolate the pageblock before the boundary_pfn
  * @skip_isolation:	the flag to skip the pageblock isolation in second
  *			isolate_single_pageblock()
- * @migratetype:	migrate type to set in error recovery.
  *
  * Free and in-use pages can be as big as MAX_PAGE_ORDER and contain more than one
  * pageblock. When not all pageblocks within a page are isolated at the same
@@ -306,8 +305,7 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
  * the in-use page then splitting the free page.
  */
 static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
-		gfp_t gfp_flags, bool isolate_before, bool skip_isolation,
-		int migratetype)
+		gfp_t gfp_flags, bool isolate_before, bool skip_isolation)
 {
 	unsigned long start_pfn;
 	unsigned long isolate_pageblock;
@@ -333,11 +331,9 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
				      zone->zone_start_pfn);
 
 	if (skip_isolation) {
-		int mt __maybe_unused = get_pageblock_migratetype(pfn_to_page(isolate_pageblock));
-
-		VM_BUG_ON(!is_migrate_isolate(mt));
+		VM_BUG_ON(!get_pageblock_isolate(pfn_to_page(isolate_pageblock)));
 	} else {
-		ret = set_migratetype_isolate(pfn_to_page(isolate_pageblock), migratetype,
+		ret = set_migratetype_isolate(pfn_to_page(isolate_pageblock),
				flags, isolate_pageblock,
				isolate_pageblock + pageblock_nr_pages);
 		if (ret)
@@ -436,7 +432,6 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
  * start_isolate_page_range() - mark page range MIGRATE_ISOLATE
  * @start_pfn:		The first PFN of the range to be isolated.
  * @end_pfn:		The last PFN of the range to be isolated.
- * @migratetype:	Migrate type to set in error recovery.
  * @flags:		The following flags are allowed (they can be combined in
  *			a bit mask)
  *			MEMORY_OFFLINE - isolate to offline (!allocate) memory
@@ -478,7 +473,7 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
  * Return: 0 on success and -EBUSY if any part of range cannot be isolated.
  */
 int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
-			     int migratetype, int flags, gfp_t gfp_flags)
+			     int flags, gfp_t gfp_flags)
 {
 	unsigned long pfn;
 	struct page *page;
@@ -490,7 +485,7 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 
 	/* isolate [isolate_start, isolate_start + pageblock_nr_pages) pageblock */
 	ret = isolate_single_pageblock(isolate_start, flags, gfp_flags, false,
-			skip_isolation, migratetype);
+			skip_isolation);
 	if (ret)
 		return ret;
 
@@ -499,7 +494,7 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 	/* isolate [isolate_end - pageblock_nr_pages, isolate_end) pageblock */
 	ret = isolate_single_pageblock(isolate_end, flags, gfp_flags, true,
-			skip_isolation, migratetype);
+			skip_isolation);
 	if (ret) {
 		unset_migratetype_isolate(pfn_to_page(isolate_start));
 		return ret;
@@ -510,7 +505,7 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
	     pfn < isolate_end - pageblock_nr_pages;
	     pfn += pageblock_nr_pages) {
 		page = __first_valid_page(pfn, pageblock_nr_pages);
-		if (page && set_migratetype_isolate(page, migratetype, flags,
+		if (page && set_migratetype_isolate(page, flags,
						    start_pfn, end_pfn)) {
 			undo_isolate_page_range(isolate_start, pfn);
 			unset_migratetype_isolate(
migratetype is no longer overwritten during pageblock isolation, so
start_isolate_page_range(), has_unmovable_pages(), and
set_migratetype_isolate() no longer need to know which migratetype to
restore on isolation failure. has_unmovable_pages() does need to know
whether the isolation is for a CMA allocation, so add CMA_ALLOCATION to
the isolation flags to provide that information.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 include/linux/page-isolation.h |  3 ++-
 mm/memory_hotplug.c            |  1 -
 mm/page_alloc.c                |  4 +++-
 mm/page_isolation.c            | 27 +++++++++++----------------
 4 files changed, 16 insertions(+), 19 deletions(-)