| Message ID | 20230825190436.55045-13-mike.kravetz@oracle.com (mailing list archive) |
|---|---|
| State | New |
| Series | Batch hugetlb vmemmap modification operations |
Hi Mike,

kernel test robot noticed the following build warnings:

[auto build test WARNING on next-20230825]
[cannot apply to akpm-mm/mm-everything v6.5-rc7 v6.5-rc6 v6.5-rc5 linus/master v6.5-rc7]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:            https://github.com/intel-lab-lkp/linux/commits/Mike-Kravetz/hugetlb-clear-flags-in-tail-pages-that-will-be-freed-individually/20230826-030805
base:           next-20230825
patch link:     https://lore.kernel.org/r/20230825190436.55045-13-mike.kravetz%40oracle.com
patch subject:  [PATCH 12/12] hugetlb: batch TLB flushes when restoring vmemmap
config:         s390-randconfig-001-20230826 (https://download.01.org/0day-ci/archive/20230826/202308261516.F6FBNktd-lkp@intel.com/config)
compiler:       clang version 17.0.0 (https://github.com/llvm/llvm-project.git 4a5ac14ee968ff0ad5d2cc1ffa0299048db4c88a)
reproduce:      (https://download.01.org/0day-ci/archive/20230826/202308261516.F6FBNktd-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202308261516.F6FBNktd-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> mm/hugetlb_vmemmap.c:516:5: warning: no previous prototype for function '__hugetlb_vmemmap_restore' [-Wmissing-prototypes]
     516 | int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head, bool bulk)
         |     ^
   mm/hugetlb_vmemmap.c:516:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
     516 | int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head, bool bulk)
         | ^
         | static
   mm/hugetlb_vmemmap.c:567:28: error: use of undeclared identifier 'TLB_FLUSH_ALL'
     567 |         flush_tlb_kernel_range(0, TLB_FLUSH_ALL);
         |                                   ^
   mm/hugetlb_vmemmap.c:673:6: warning: no previous prototype for function 'hugetlb_vmemmap_optimize_bulk' [-Wmissing-prototypes]
     673 | void hugetlb_vmemmap_optimize_bulk(const struct hstate *h, struct page *head,
         |      ^
   mm/hugetlb_vmemmap.c:673:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
     673 | void hugetlb_vmemmap_optimize_bulk(const struct hstate *h, struct page *head,
         | ^
         | static
   mm/hugetlb_vmemmap.c:679:6: warning: no previous prototype for function 'hugetlb_vmemmap_split' [-Wmissing-prototypes]
     679 | void hugetlb_vmemmap_split(const struct hstate *h, struct page *head)
         |      ^
   mm/hugetlb_vmemmap.c:679:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
     679 | void hugetlb_vmemmap_split(const struct hstate *h, struct page *head)
         | ^
         | static
   mm/hugetlb_vmemmap.c:710:28: error: use of undeclared identifier 'TLB_FLUSH_ALL'
     710 |         flush_tlb_kernel_range(0, TLB_FLUSH_ALL);
         |                                   ^
   mm/hugetlb_vmemmap.c:715:28: error: use of undeclared identifier 'TLB_FLUSH_ALL'
     715 |         flush_tlb_kernel_range(0, TLB_FLUSH_ALL);
         |                                   ^
   3 warnings and 3 errors generated.
vim +/__hugetlb_vmemmap_restore +516 mm/hugetlb_vmemmap.c

   515
 > 516  int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head, bool bulk)
   517  {
   518          int ret;
   519          unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
   520          unsigned long vmemmap_reuse;
   521
   522          if (!HPageVmemmapOptimized(head))
   523                  return 0;
   524
   525          vmemmap_end     = vmemmap_start + hugetlb_vmemmap_size(h);
   526          vmemmap_reuse   = vmemmap_start;
   527          vmemmap_start   += HUGETLB_VMEMMAP_RESERVE_SIZE;
   528
   529          /*
   530           * The pages which the vmemmap virtual address range [@vmemmap_start,
   531           * @vmemmap_end) are mapped to are freed to the buddy allocator, and
   532           * the range is mapped to the page which @vmemmap_reuse is mapped to.
   533           * When a HugeTLB page is freed to the buddy allocator, previously
   534           * discarded vmemmap pages must be allocated and remapping.
   535           */
   536          ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse, bulk);
   537          if (!ret) {
   538                  ClearHPageVmemmapOptimized(head);
   539                  static_branch_dec(&hugetlb_optimize_vmemmap_key);
   540          }
   541
   542          return ret;
   543  }
   544
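For readers skimming the robot report: TLB_FLUSH_ALL appears to be an x86-specific definition, so the flush_tlb_kernel_range(0, TLB_FLUSH_ALL) call sites added by the series fail to compile on this s390 randconfig, and the -Wmissing-prototypes warnings come from the new helpers having external linkage without a declaration. A minimal sketch of one possible build fix is shown below; it is not taken from the posted series, and whether a full kernel-range flush (as opposed to, say, flush_tlb_all()) is appropriate on every architecture is a separate review question.

/*
 * Hypothetical fallback, not part of the posted patch: only x86 defines
 * TLB_FLUSH_ALL, so give other architectures (such as the s390 config in
 * this report) a "whole address space" end value for
 * flush_tlb_kernel_range(0, TLB_FLUSH_ALL).
 */
#ifndef TLB_FLUSH_ALL
#define TLB_FLUSH_ALL	(-1UL)
#endif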
On 2023/8/26 03:04, Mike Kravetz wrote:
> Update the hugetlb_vmemmap_restore path to take a 'batch' parameter that
> indicates restoration is happening on a batch of pages.  When set, use
> the existing mechanism (VMEMMAP_REMAP_BULK_PAGES) to delay TLB flushing.
> The routine hugetlb_vmemmap_restore_folios is the only user of this new
> batch parameter and it will perform a global flush after all vmemmap is
> restored.
>
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
>  mm/hugetlb_vmemmap.c | 37 +++++++++++++++++++++++--------------
>  1 file changed, 23 insertions(+), 14 deletions(-)
>
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index a2fc7b03ac6b..d6e7440b9507 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -479,17 +479,19 @@ static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
>   * @end: end address of the vmemmap virtual address range that we want to
>   *       remap.
>   * @reuse: reuse address.
> + * @bulk: bulk operation, batch TLB flushes
>   *
>   * Return: %0 on success, negative error code otherwise.
>   */
>  static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
> -			       unsigned long reuse)
> +			       unsigned long reuse, bool bulk)

I'd like to let vmemmap_remap_alloc() take VMEMMAP_REMAP_BULK_PAGES directly;
in that case, we would not need to change this function again if another flag
is introduced in the future. That is, change "bool bulk" to "unsigned long flags".

>  {
>  	LIST_HEAD(vmemmap_pages);
>  	struct vmemmap_remap_walk walk = {
>  		.remap_pte	= vmemmap_restore_pte,
>  		.reuse_addr	= reuse,
>  		.vmemmap_pages	= &vmemmap_pages,
> +		.flags		= !bulk ? 0 : VMEMMAP_REMAP_BULK_PAGES,
>  	};
>
>  	/* See the comment in the vmemmap_remap_free(). */
> @@ -511,17 +513,7 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
>  static bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
>  core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);
>
> -/**
> - * hugetlb_vmemmap_restore - restore previously optimized (by
> - *			     hugetlb_vmemmap_optimize()) vmemmap pages which
> - *			     will be reallocated and remapped.
> - * @h:		struct hstate.
> - * @head:	the head page whose vmemmap pages will be restored.
> - *
> - * Return: %0 if @head's vmemmap pages have been reallocated and remapped,
> - * negative error code otherwise.
> - */
> -int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> +int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head, bool bulk)

The same applies here.

>  {
>  	int ret;
>  	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
> @@ -541,7 +533,7 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
>  	 * When a HugeTLB page is freed to the buddy allocator, previously
>  	 * discarded vmemmap pages must be allocated and remapping.
>  	 */
> -	ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse);
> +	ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse, bulk);
>  	if (!ret) {
>  		ClearHPageVmemmapOptimized(head);
>  		static_branch_dec(&hugetlb_optimize_vmemmap_key);
> @@ -550,12 +542,29 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
>  	return ret;
>  }
>
> +/**
> + * hugetlb_vmemmap_restore - restore previously optimized (by
> + *			     hugetlb_vmemmap_optimize()) vmemmap pages which
> + *			     will be reallocated and remapped.
> + * @h:		struct hstate.
> + * @head:	the head page whose vmemmap pages will be restored.
> + *
> + * Return: %0 if @head's vmemmap pages have been reallocated and remapped,
> + * negative error code otherwise.
> + */
> +int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> +{
> +	return __hugetlb_vmemmap_restore(h, head, false);
> +}
> +
>  void hugetlb_vmemmap_restore_folios(const struct hstate *h, struct list_head *folio_list)
>  {
>  	struct folio *folio;
>
>  	list_for_each_entry(folio, folio_list, lru)
> -		hugetlb_vmemmap_restore(h, &folio->page);
> +		(void)__hugetlb_vmemmap_restore(h, &folio->page, true);

Pass VMEMMAP_REMAP_BULK_PAGES directly here.

Thanks.

> +
> +	flush_tlb_kernel_range(0, TLB_FLUSH_ALL);
>  }
>
>  /* Return true iff a HugeTLB whose vmemmap should and can be optimized. */
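To make the suggestion above concrete, a minimal sketch of the flags-based signatures being requested might look as follows. It is illustrative only, not a posted revision; bodies that would stay as in the patch are elided, and making __hugetlb_vmemmap_restore() static here would also quiet the -Wmissing-prototypes warning from the robot report.

/* Sketch only: thread a flags word end-to-end instead of a bool. */
static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
			       unsigned long reuse, unsigned long flags)
{
	LIST_HEAD(vmemmap_pages);
	struct vmemmap_remap_walk walk = {
		.remap_pte	= vmemmap_restore_pte,
		.reuse_addr	= reuse,
		.vmemmap_pages	= &vmemmap_pages,
		.flags		= flags,	/* 0 or VMEMMAP_REMAP_BULK_PAGES */
	};

	/* ... remainder unchanged from the posted patch ... */
}

static int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head,
				     unsigned long flags)
{
	/* ... unchanged, except the flags word is forwarded ... */
	return vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse, flags);
}

int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
{
	return __hugetlb_vmemmap_restore(h, head, 0);
}

void hugetlb_vmemmap_restore_folios(const struct hstate *h, struct list_head *folio_list)
{
	struct folio *folio;

	list_for_each_entry(folio, folio_list, lru)
		(void)__hugetlb_vmemmap_restore(h, &folio->page,
						VMEMMAP_REMAP_BULK_PAGES);

	flush_tlb_kernel_range(0, TLB_FLUSH_ALL);
}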
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index a2fc7b03ac6b..d6e7440b9507 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -479,17 +479,19 @@ static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
  * @end: end address of the vmemmap virtual address range that we want to
  *       remap.
  * @reuse: reuse address.
+ * @bulk: bulk operation, batch TLB flushes
  *
  * Return: %0 on success, negative error code otherwise.
  */
 static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
-			       unsigned long reuse)
+			       unsigned long reuse, bool bulk)
 {
 	LIST_HEAD(vmemmap_pages);
 	struct vmemmap_remap_walk walk = {
 		.remap_pte	= vmemmap_restore_pte,
 		.reuse_addr	= reuse,
 		.vmemmap_pages	= &vmemmap_pages,
+		.flags		= !bulk ? 0 : VMEMMAP_REMAP_BULK_PAGES,
 	};

 	/* See the comment in the vmemmap_remap_free(). */
@@ -511,17 +513,7 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 static bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
 core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);

-/**
- * hugetlb_vmemmap_restore - restore previously optimized (by
- *			     hugetlb_vmemmap_optimize()) vmemmap pages which
- *			     will be reallocated and remapped.
- * @h:		struct hstate.
- * @head:	the head page whose vmemmap pages will be restored.
- *
- * Return: %0 if @head's vmemmap pages have been reallocated and remapped,
- * negative error code otherwise.
- */
-int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
+int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head, bool bulk)
 {
 	int ret;
 	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
@@ -541,7 +533,7 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
 	 * When a HugeTLB page is freed to the buddy allocator, previously
 	 * discarded vmemmap pages must be allocated and remapping.
 	 */
-	ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse);
+	ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse, bulk);
 	if (!ret) {
 		ClearHPageVmemmapOptimized(head);
 		static_branch_dec(&hugetlb_optimize_vmemmap_key);
@@ -550,12 +542,29 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
 	return ret;
 }

+/**
+ * hugetlb_vmemmap_restore - restore previously optimized (by
+ *			     hugetlb_vmemmap_optimize()) vmemmap pages which
+ *			     will be reallocated and remapped.
+ * @h:		struct hstate.
+ * @head:	the head page whose vmemmap pages will be restored.
+ *
+ * Return: %0 if @head's vmemmap pages have been reallocated and remapped,
+ * negative error code otherwise.
+ */
+int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
+{
+	return __hugetlb_vmemmap_restore(h, head, false);
+}
+
 void hugetlb_vmemmap_restore_folios(const struct hstate *h, struct list_head *folio_list)
 {
 	struct folio *folio;

 	list_for_each_entry(folio, folio_list, lru)
-		hugetlb_vmemmap_restore(h, &folio->page);
+		(void)__hugetlb_vmemmap_restore(h, &folio->page, true);
+
+	flush_tlb_kernel_range(0, TLB_FLUSH_ALL);
 }

 /* Return true iff a HugeTLB whose vmemmap should and can be optimized. */
Update the hugetlb_vmemmap_restore path to take a 'batch' parameter that
indicates restoration is happening on a batch of pages.  When set, use
the existing mechanism (VMEMMAP_REMAP_BULK_PAGES) to delay TLB flushing.
The routine hugetlb_vmemmap_restore_folios is the only user of this new
batch parameter and it will perform a global flush after all vmemmap is
restored.

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/hugetlb_vmemmap.c | 37 +++++++++++++++++++++++--------------
 1 file changed, 23 insertions(+), 14 deletions(-)
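To spell out the behavioural difference the commit message describes, the fragment below contrasts the two restore paths for a list of folios. It is illustrative only, not code from the series; 'h' and 'folio_list' are assumed to come from the hugetlb freeing path, and a real caller would use one alternative or the other.

	/* Per-folio path: each restore performs its own TLB flush internally. */
	list_for_each_entry(folio, folio_list, lru)
		hugetlb_vmemmap_restore(h, &folio->page);

	/*
	 * Batched path (this patch): per-folio flushes are suppressed via
	 * VMEMMAP_REMAP_BULK_PAGES and a single flush_tlb_kernel_range()
	 * covering the whole kernel range runs once all vmemmap is restored.
	 */
	hugetlb_vmemmap_restore_folios(h, folio_list);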