Message ID | 56E2FB5C.1040602@suse.cz (mailing list archive)
---|---
State | New, archived
On Fri, Mar 11, 2016 at 06:07:40PM +0100, Vlastimil Babka wrote:
> On 03/11/2016 04:00 PM, Joonsoo Kim wrote:
> > 2016-03-09 10:23 GMT+09:00 Leizhen (ThunderTown) <thunder.leizhen@huawei.com>:
> >>
> >> Hi, Joonsoo:
> >> This new patch worked well. Do you plan to upstream it in the near future?
> >
> > Of course!
> > But I should think about it more, because it touches the allocator's
> > fastpath and I'd like to avoid that. If I fail to find a better
> > solution, I will send it as is, soon.
>
> How about something like this? Just an idea, probably buggy (off-by-one
> etc.). It should keep the cost away from the <pageblock_order iterations
> at the expense of the relatively fewer >pageblock_order iterations.

Hmm... I tested this and found that its code size is a little bit larger
than mine. I'm not sure exactly why that happens, but I guess it's
related to compiler optimization. In this case, I'm in favor of my
implementation because it looks like a cleaner abstraction. It adds one
unlikely branch to the merge loop, but the compiler should optimize it
to check only once.

Thanks.

> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ff1e3cbc8956..b8005a07b2a1 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -685,21 +685,13 @@ static inline void __free_one_page(struct page *page,
>  	unsigned long combined_idx;
>  	unsigned long uninitialized_var(buddy_idx);
>  	struct page *buddy;
> -	unsigned int max_order = MAX_ORDER;
> +	unsigned int max_order = pageblock_order + 1;
>
>  	VM_BUG_ON(!zone_is_initialized(zone));
>  	VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
>
>  	VM_BUG_ON(migratetype == -1);
> -	if (is_migrate_isolate(migratetype)) {
> -		/*
> -		 * We restrict max order of merging to prevent merge
> -		 * between freepages on isolate pageblock and normal
> -		 * pageblock. Without this, pageblock isolation
> -		 * could cause incorrect freepage accounting.
> -		 */
> -		max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
> -	} else {
> +	if (likely(!is_migrate_isolate(migratetype))) {
>  		__mod_zone_freepage_state(zone, 1 << order, migratetype);
>  	}
>
> @@ -708,11 +700,12 @@ static inline void __free_one_page(struct page *page,
>  	VM_BUG_ON_PAGE(page_idx & ((1 << order) - 1), page);
>  	VM_BUG_ON_PAGE(bad_range(zone, page), page);
>
> +continue_merging:
>  	while (order < max_order - 1) {
>  		buddy_idx = __find_buddy_index(page_idx, order);
>  		buddy = page + (buddy_idx - page_idx);
>  		if (!page_is_buddy(page, buddy, order))
> -			break;
> +			goto done_merging;
>  		/*
>  		 * Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page,
>  		 * merge with it and move up one order.
> @@ -729,6 +722,26 @@ static inline void __free_one_page(struct page *page,
>  		page_idx = combined_idx;
>  		order++;
>  	}
> +	if (max_order < MAX_ORDER) {
> +		if (IS_ENABLED(CONFIG_CMA) &&
> +			unlikely(has_isolate_pageblock(zone))) {
> +
> +			int buddy_mt;
> +
> +			buddy_idx = __find_buddy_index(page_idx, order);
> +			buddy = page + (buddy_idx - page_idx);
> +			buddy_mt = get_pageblock_migratetype(buddy);
> +
> +			if (migratetype != buddy_mt &&
> +			    (is_migrate_isolate(migratetype) ||
> +			     is_migrate_isolate(buddy_mt)))
> +				goto done_merging;
> +		}
> +		max_order++;
> +		goto continue_merging;
> +	}
> +
> +done_merging:
>  	set_page_order(page, order);
>
>  	/*
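As background for the merge loop under discussion, the buddy index
arithmetic it relies on can be demonstrated in a few lines of userspace
C. The XOR mirrors the kernel's __find_buddy_index() and the AND mirrors
its combined_idx computation; the main() driver and the example index 12
are invented purely for illustration.

#include <stdio.h>

/*
 * The buddy of a block of size 2^order at page_idx differs from it only
 * in bit `order`, so finding it is a single XOR; merging the pair keeps
 * the lower of the two indexes, which is the bitwise AND.
 */
static unsigned long find_buddy_index(unsigned long page_idx, unsigned int order)
{
	return page_idx ^ (1UL << order);
}

int main(void)
{
	unsigned long page_idx = 12;	/* arbitrary example index */
	unsigned int order;

	for (order = 0; order < 4; order++) {
		unsigned long buddy_idx = find_buddy_index(page_idx, order);
		unsigned long combined_idx = buddy_idx & page_idx;

		printf("order %u: block %lu, buddy %lu, merged block %lu\n",
		       order, page_idx, buddy_idx, combined_idx);
		page_idx = combined_idx;
	}
	return 0;
}

Freeing index 12 at order 0 merges with buddy 13 into an order-1 block
at 12, then with 14 into an order-2 block, with 8 into an order-3 block,
and so on, which is exactly the walk the kernel loop performs.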
On 03/14/2016 07:49 AM, Joonsoo Kim wrote:
> On Fri, Mar 11, 2016 at 06:07:40PM +0100, Vlastimil Babka wrote:
>> On 03/11/2016 04:00 PM, Joonsoo Kim wrote:
>>
>> How about something like this? Just an idea, probably buggy (off-by-one
>> etc.). It should keep the cost away from the <pageblock_order iterations
>> at the expense of the relatively fewer >pageblock_order iterations.
>
> Hmm... I tested this and found that its code size is a little bit larger
> than mine. I'm not sure exactly why that happens, but I guess it's
> related to compiler optimization. In this case, I'm in favor of my
> implementation because it looks like a cleaner abstraction. It adds one
> unlikely branch to the merge loop, but the compiler should optimize it
> to check only once.

I would be surprised if the compiler optimized that to check it only
once, as order increases with each loop iteration. But maybe it's smart
enough to do something like I did by hand? Guess I'll check the
disassembly.

> Thanks.
>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index ff1e3cbc8956..b8005a07b2a1 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -685,21 +685,13 @@ static inline void __free_one_page(struct page *page,
>>  	unsigned long combined_idx;
>>  	unsigned long uninitialized_var(buddy_idx);
>>  	struct page *buddy;
>> -	unsigned int max_order = MAX_ORDER;
>> +	unsigned int max_order = pageblock_order + 1;
>>
>>  	VM_BUG_ON(!zone_is_initialized(zone));
>>  	VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
>>
>>  	VM_BUG_ON(migratetype == -1);
>> -	if (is_migrate_isolate(migratetype)) {
>> -		/*
>> -		 * We restrict max order of merging to prevent merge
>> -		 * between freepages on isolate pageblock and normal
>> -		 * pageblock. Without this, pageblock isolation
>> -		 * could cause incorrect freepage accounting.
>> -		 */
>> -		max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
>> -	} else {
>> +	if (likely(!is_migrate_isolate(migratetype))) {
>>  		__mod_zone_freepage_state(zone, 1 << order, migratetype);
>>  	}
>>
>> @@ -708,11 +700,12 @@ static inline void __free_one_page(struct page *page,
>>  	VM_BUG_ON_PAGE(page_idx & ((1 << order) - 1), page);
>>  	VM_BUG_ON_PAGE(bad_range(zone, page), page);
>>
>> +continue_merging:
>>  	while (order < max_order - 1) {
>>  		buddy_idx = __find_buddy_index(page_idx, order);
>>  		buddy = page + (buddy_idx - page_idx);
>>  		if (!page_is_buddy(page, buddy, order))
>> -			break;
>> +			goto done_merging;
>>  		/*
>>  		 * Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page,
>>  		 * merge with it and move up one order.
>> @@ -729,6 +722,26 @@ static inline void __free_one_page(struct page *page,
>>  		page_idx = combined_idx;
>>  		order++;
>>  	}
>> +	if (max_order < MAX_ORDER) {
>> +		if (IS_ENABLED(CONFIG_CMA) &&
>> +			unlikely(has_isolate_pageblock(zone))) {
>> +
>> +			int buddy_mt;
>> +
>> +			buddy_idx = __find_buddy_index(page_idx, order);
>> +			buddy = page + (buddy_idx - page_idx);
>> +			buddy_mt = get_pageblock_migratetype(buddy);
>> +
>> +			if (migratetype != buddy_mt &&
>> +			    (is_migrate_isolate(migratetype) ||
>> +			     is_migrate_isolate(buddy_mt)))
>> +				goto done_merging;
>> +		}
>> +		max_order++;
>> +		goto continue_merging;
>> +	}
>> +
>> +done_merging:
>>  	set_page_order(page, order);
>>
>>  	/*
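To illustrate the point about per-iteration checks, here is a toy,
compilable contrast of the two loop shapes in question. The predicates
can_merge() and isolation_ok() are stand-ins invented for this sketch,
not kernel functions. In the first shape the extra condition depends on
order, which changes every iteration, so the compiler cannot simply
hoist it out of the loop; it would have to split ("unswitch") the loop,
which is what the diff above does by hand in the second shape.

#include <stdbool.h>
#include <stdio.h>

/* Values typical of x86-64 at the time; assumed for illustration. */
#define PAGEBLOCK_ORDER	9
#define MAX_ORDER	11

/* Always-true stand-ins, just to make the control flow runnable. */
static bool can_merge(unsigned int order) { (void)order; return true; }
static bool isolation_ok(void) { return true; }

/* Shape 1: one extra branch inside the loop, evaluated every pass. */
static unsigned int merge_in_loop(unsigned int order)
{
	while (order < MAX_ORDER - 1) {
		if (!can_merge(order))
			break;
		/* Crossing a pageblock boundary needs the isolation check. */
		if (order >= PAGEBLOCK_ORDER && !isolation_ok())
			break;
		order++;
	}
	return order;
}

/* Shape 2: the loop unswitched by hand, mirroring the diff above: a fast
 * loop up to pageblock_order, one isolation check, then the rest. */
static unsigned int merge_unswitched(unsigned int order)
{
	unsigned int max_order = PAGEBLOCK_ORDER + 1;

continue_merging:
	while (order < max_order - 1) {
		if (!can_merge(order))
			goto done_merging;
		order++;
	}
	if (max_order < MAX_ORDER) {
		if (!isolation_ok())
			goto done_merging;
		max_order++;
		goto continue_merging;
	}
done_merging:
	return order;
}

int main(void)
{
	/* Both shapes reach the same final order. */
	printf("in-loop: %u, unswitched: %u\n",
	       merge_in_loop(0), merge_unswitched(0));
	return 0;
}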
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ff1e3cbc8956..b8005a07b2a1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -685,21 +685,13 @@ static inline void __free_one_page(struct page *page,
 	unsigned long combined_idx;
 	unsigned long uninitialized_var(buddy_idx);
 	struct page *buddy;
-	unsigned int max_order = MAX_ORDER;
+	unsigned int max_order = pageblock_order + 1;
 
 	VM_BUG_ON(!zone_is_initialized(zone));
 	VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
 
 	VM_BUG_ON(migratetype == -1);
-	if (is_migrate_isolate(migratetype)) {
-		/*
-		 * We restrict max order of merging to prevent merge
-		 * between freepages on isolate pageblock and normal
-		 * pageblock. Without this, pageblock isolation
-		 * could cause incorrect freepage accounting.
-		 */
-		max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
-	} else {
+	if (likely(!is_migrate_isolate(migratetype))) {
 		__mod_zone_freepage_state(zone, 1 << order, migratetype);
 	}
 
@@ -708,11 +700,12 @@ static inline void __free_one_page(struct page *page,
 	VM_BUG_ON_PAGE(page_idx & ((1 << order) - 1), page);
 	VM_BUG_ON_PAGE(bad_range(zone, page), page);
 
+continue_merging:
 	while (order < max_order - 1) {
 		buddy_idx = __find_buddy_index(page_idx, order);
 		buddy = page + (buddy_idx - page_idx);
 		if (!page_is_buddy(page, buddy, order))
-			break;
+			goto done_merging;
 		/*
 		 * Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page,
 		 * merge with it and move up one order.
@@ -729,6 +722,26 @@ static inline void __free_one_page(struct page *page,
 		page_idx = combined_idx;
 		order++;
 	}
+	if (max_order < MAX_ORDER) {
+		if (IS_ENABLED(CONFIG_CMA) &&
+			unlikely(has_isolate_pageblock(zone))) {
+
+			int buddy_mt;
+
+			buddy_idx = __find_buddy_index(page_idx, order);
+			buddy = page + (buddy_idx - page_idx);
+			buddy_mt = get_pageblock_migratetype(buddy);
+
+			if (migratetype != buddy_mt &&
+			    (is_migrate_isolate(migratetype) ||
+			     is_migrate_isolate(buddy_mt)))
+				goto done_merging;
+		}
+		max_order++;
+		goto continue_merging;
+	}
+
+done_merging:
 	set_page_order(page, order);
 
 	/*
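For concreteness, with the values typical of x86-64 at the time
(pageblock_order = 9, MAX_ORDER = 11; assumed here, not stated in the
thread), the fast loop above merges at orders 0 through 8, i.e.
everything up to a single pageblock, and only the rare merge of two
pageblocks into an order-10 block takes the isolation-checked path. A
trivial check of the bounds arithmetic:

#include <stdio.h>

/* Values typical of x86-64 at the time; assumed for illustration. */
#define PAGEBLOCK_ORDER	9
#define MAX_ORDER	11

int main(void)
{
	unsigned int max_order = PAGEBLOCK_ORDER + 1;

	/* The fast loop runs while (order < max_order - 1), so it merges
	 * at orders 0 .. max_order - 2, producing at most one pageblock. */
	printf("fast-path merges at orders 0..%u -> largest block order %u\n",
	       max_order - 2, max_order - 1);
	/* Merges that combine two pageblocks take the checked path. */
	printf("checked merges at orders %u..%u\n",
	       max_order - 1, MAX_ORDER - 2);
	return 0;
}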