Message ID | 20230621212403.174710-1-mike.kravetz@oracle.com (mailing list archive)
State      | New, archived
Series     | [1/2] Revert "page cache: fix page_cache_next/prev_miss off by one"
On 6/21/23 2:24 PM, Mike Kravetz wrote:
> This reverts commit 9425c591e06a9ab27a145ba655fb50532cf0bcc9
>
> The reverted commit fixed up routines primarily used by readahead code
> such that they could also be used by hugetlb. Unfortunately, this
> caused a performance regression as pointed out by the Closes: tag.
>
> The hugetlb code which uses page_cache_next_miss will be addressed in
> a subsequent patch.
>
> Reported-by: kernel test robot <oliver.sang@intel.com>
> Closes: https://lore.kernel.org/oe-lkp/202306211346.1e9ff03e-oliver.sang@intel.com
> Fixes: 9425c591e06a ("page cache: fix page_cache_next/prev_miss off by one")
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
>  mm/filemap.c | 26 ++++++++++----------------
>  1 file changed, 10 insertions(+), 16 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 3b73101f9f86..9e44a49bbd74 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1728,9 +1728,7 @@ bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
>   *
>   * Return: The index of the gap if found, otherwise an index outside the
>   * range specified (in which case 'return - index >= max_scan' will be true).
> - * In the rare case of index wrap-around, 0 will be returned. 0 will also
> - * be returned if index == 0 and there is a gap at the index. We can not
> - * wrap-around if passed index == 0.
> + * In the rare case of index wrap-around, 0 will be returned.
>   */
>  pgoff_t page_cache_next_miss(struct address_space *mapping,
>  			     pgoff_t index, unsigned long max_scan)
> @@ -1740,13 +1738,12 @@ pgoff_t page_cache_next_miss(struct address_space *mapping,
>  	while (max_scan--) {
>  		void *entry = xas_next(&xas);
>  		if (!entry || xa_is_value(entry))
> -			return xas.xa_index;
> -		if (xas.xa_index == 0 && index != 0)
> -			return xas.xa_index;
> +			break;
> +		if (xas.xa_index == 0)
> +			break;
>  	}
>
> -	/* No gaps in range and no wrap-around, return index beyond range */
> -	return xas.xa_index + 1;
> +	return xas.xa_index;
>  }
>  EXPORT_SYMBOL(page_cache_next_miss);
>
> @@ -1767,9 +1764,7 @@ EXPORT_SYMBOL(page_cache_next_miss);
>   *
>   * Return: The index of the gap if found, otherwise an index outside the
>   * range specified (in which case 'index - return >= max_scan' will be true).
> - * In the rare case of wrap-around, ULONG_MAX will be returned. ULONG_MAX
> - * will also be returned if index == ULONG_MAX and there is a gap at the
> - * index. We can not wrap-around if passed index == ULONG_MAX.
> + * In the rare case of wrap-around, ULONG_MAX will be returned.
>   */
>  pgoff_t page_cache_prev_miss(struct address_space *mapping,
>  			     pgoff_t index, unsigned long max_scan)
> @@ -1779,13 +1774,12 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
>  	while (max_scan--) {
>  		void *entry = xas_prev(&xas);
>  		if (!entry || xa_is_value(entry))
> -			return xas.xa_index;
> -		if (xas.xa_index == ULONG_MAX && index != ULONG_MAX)
> -			return xas.xa_index;
> +			break;
> +		if (xas.xa_index == ULONG_MAX)
> +			break;
>  	}
>
> -	/* No gaps in range and no wrap-around, return index beyond range */
> -	return xas.xa_index - 1;
> +	return xas.xa_index;
>  }
>  EXPORT_SYMBOL(page_cache_prev_miss);

Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
On Wed, 21 Jun 2023 14:24:02 -0700 Mike Kravetz <mike.kravetz@oracle.com> wrote:

> This reverts commit 9425c591e06a9ab27a145ba655fb50532cf0bcc9
>
> The reverted commit fixed up routines primarily used by readahead code
> such that they could also be used by hugetlb. Unfortunately, this
> caused a performance regression as pointed out by the Closes: tag.
>
> The hugetlb code which uses page_cache_next_miss will be addressed in
> a subsequent patch.

Often these throughput changes are caused by rather random
alignment/layout changes and the code change itself was innocent.

Do we have an explanation for this regression, or was it a surprise?
On 06/21/23 15:18, Andrew Morton wrote:
> On Wed, 21 Jun 2023 14:24:02 -0700 Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> > This reverts commit 9425c591e06a9ab27a145ba655fb50532cf0bcc9
> >
> > The reverted commit fixed up routines primarily used by readahead code
> > such that they could also be used by hugetlb. Unfortunately, this
> > caused a performance regression as pointed out by the Closes: tag.
> >
> > The hugetlb code which uses page_cache_next_miss will be addressed in
> > a subsequent patch.
>
> Often these throughput changes are caused by rather random
> alignment/layout changes and the code change itself was innocent.
>
> Do we have an explanation for this regression, or was it a surprise?

It was not a total surprise. As mentioned, the primary user of this
interface is the readahead code. The code in question is in
ondemand_readahead.

	rcu_read_lock();
	start = page_cache_next_miss(ractl->mapping, index + 1, max_pages);
	rcu_read_unlock();

	if (!start || start - index > max_pages)
		return;

With the reverted changes, we will take that quick return when there are
no gaps in the range. Previously we did not.

I am of the belief that page_cache_next_miss behavior did not match the
function description. Matthew suggested using page_cache_next_miss in the
hugetlb code, and I can only guess that he also thought it behaved as
documented. I do not know the readahead code well enough to know exactly
what is expected. readahead certainly performs worse with my proposed
changes. Since we can easily 'fix' the hugetlb code in another way, let's
do that and leave the readahead code alone unless someone more
knowledgeable can provide insight.
diff --git a/mm/filemap.c b/mm/filemap.c
index 3b73101f9f86..9e44a49bbd74 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1728,9 +1728,7 @@ bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
  *
  * Return: The index of the gap if found, otherwise an index outside the
  * range specified (in which case 'return - index >= max_scan' will be true).
- * In the rare case of index wrap-around, 0 will be returned. 0 will also
- * be returned if index == 0 and there is a gap at the index. We can not
- * wrap-around if passed index == 0.
+ * In the rare case of index wrap-around, 0 will be returned.
  */
 pgoff_t page_cache_next_miss(struct address_space *mapping,
 			     pgoff_t index, unsigned long max_scan)
@@ -1740,13 +1738,12 @@ pgoff_t page_cache_next_miss(struct address_space *mapping,
 	while (max_scan--) {
 		void *entry = xas_next(&xas);
 		if (!entry || xa_is_value(entry))
-			return xas.xa_index;
-		if (xas.xa_index == 0 && index != 0)
-			return xas.xa_index;
+			break;
+		if (xas.xa_index == 0)
+			break;
 	}

-	/* No gaps in range and no wrap-around, return index beyond range */
-	return xas.xa_index + 1;
+	return xas.xa_index;
 }
 EXPORT_SYMBOL(page_cache_next_miss);

@@ -1767,9 +1764,7 @@ EXPORT_SYMBOL(page_cache_next_miss);
  *
  * Return: The index of the gap if found, otherwise an index outside the
  * range specified (in which case 'index - return >= max_scan' will be true).
- * In the rare case of wrap-around, ULONG_MAX will be returned. ULONG_MAX
- * will also be returned if index == ULONG_MAX and there is a gap at the
- * index. We can not wrap-around if passed index == ULONG_MAX.
+ * In the rare case of wrap-around, ULONG_MAX will be returned.
  */
 pgoff_t page_cache_prev_miss(struct address_space *mapping,
 			     pgoff_t index, unsigned long max_scan)
@@ -1779,13 +1774,12 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
 	while (max_scan--) {
 		void *entry = xas_prev(&xas);
 		if (!entry || xa_is_value(entry))
-			return xas.xa_index;
-		if (xas.xa_index == ULONG_MAX && index != ULONG_MAX)
-			return xas.xa_index;
+			break;
+		if (xas.xa_index == ULONG_MAX)
+			break;
 	}

-	/* No gaps in range and no wrap-around, return index beyond range */
-	return xas.xa_index - 1;
+	return xas.xa_index;
 }
 EXPORT_SYMBOL(page_cache_prev_miss);
This reverts commit 9425c591e06a9ab27a145ba655fb50532cf0bcc9

The reverted commit fixed up routines primarily used by readahead code
such that they could also be used by hugetlb. Unfortunately, this
caused a performance regression as pointed out by the Closes: tag.

The hugetlb code which uses page_cache_next_miss will be addressed in
a subsequent patch.

Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202306211346.1e9ff03e-oliver.sang@intel.com
Fixes: 9425c591e06a ("page cache: fix page_cache_next/prev_miss off by one")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/filemap.c | 26 ++++++++++----------------
 1 file changed, 10 insertions(+), 16 deletions(-)