| Message ID | 20220121075515.79311-1-songmuchun@bytedance.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | [1/5] mm: rmap: fix cache flush on THP pages |
On Thu, Jan 20, 2022 at 11:56 PM Muchun Song <songmuchun@bytedance.com> wrote:
>
> flush_cache_page() only removes a PAGE_SIZE-sized range from the cache.
> However, it does not cover all the pages of a THP, only the head page.
> Replace it with flush_cache_range() to fix this issue. No problems have
> been found due to this so far, maybe because few architectures have
> virtually indexed caches.

Yeah, actually flush_cache_page()/flush_cache_range() are no-ops on most
architectures that support THP, e.g. x86, aarch64, powerpc, etc. And
currently only tmpfs and read-only files support PMD-mapped THP, and
neither has to do writeback. DAX doesn't seem to have writeback either,
since it uses __set_page_dirty_no_writeback() for set_page_dirty. So IIUC
this code should never be called. But anyway your fix looks correct to me.

Reviewed-by: Yang Shi <shy828301@gmail.com>

> Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  mm/rmap.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index b0fd9dc19eba..65670cb805d6 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -974,7 +974,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
>  		if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
>  			continue;
>
> -		flush_cache_page(vma, address, page_to_pfn(page));
> +		flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
>  		entry = pmdp_invalidate(vma, address, pmd);
>  		entry = pmd_wrprotect(entry);
>  		entry = pmd_mkclean(entry);
> --
> 2.11.0
>
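For reference, the "no-op on most architectures" point comes from the generic fallbacks: an architecture that does not provide its own cache-flush helpers picks up empty stubs. A minimal sketch of what that fallback looks like, paraphrased from memory of include/asm-generic/cacheflush.h rather than copied verbatim:

```c
/*
 * Sketch of the asm-generic fallback: architectures with physically
 * indexed caches (x86, arm64, powerpc, ...) do not override these, so
 * the calls in page_mkclean_one() compile away to nothing there.
 */
#ifndef flush_cache_range
static inline void flush_cache_range(struct vm_area_struct *vma,
				     unsigned long start, unsigned long end)
{
}
#endif

#ifndef flush_cache_page
static inline void flush_cache_page(struct vm_area_struct *vma,
				    unsigned long vmaddr, unsigned long pfn)
{
}
#endif
```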
On Fri, Jan 21, 2022 at 03:55:11PM +0800, Muchun Song wrote:
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index b0fd9dc19eba..65670cb805d6 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -974,7 +974,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
>  		if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
>  			continue;
>
> -		flush_cache_page(vma, address, page_to_pfn(page));
> +		flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);

Do we need a flush_cache_folio here, given that we must be dealing with
what effectively is a folio here?

Also please avoid the overly long line.
On Mon, Jan 24, 2022 at 3:34 PM Christoph Hellwig <hch@infradead.org> wrote:
>
> On Fri, Jan 21, 2022 at 03:55:11PM +0800, Muchun Song wrote:
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index b0fd9dc19eba..65670cb805d6 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -974,7 +974,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
> >  		if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
> >  			continue;
> >
> > -		flush_cache_page(vma, address, page_to_pfn(page));
> > +		flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
>
> Do we need a flush_cache_folio here given that we must be dealing with
> what effectively is a folio here?

I think that is a future improvement. Keeping flush_cache_range() here
should make it easier for someone to backport this patch. If we are not
concerned about backporting, I think it is better to introduce
flush_cache_folio in this patch. What do you think?

>
> Also please avoid the overly long line.
>

OK. Thanks.
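For illustration only, a flush_cache_folio() along the lines Christoph suggests could be a thin wrapper over flush_cache_range(). The name, signature, and placement below are assumptions for the sake of discussion, not an API that exists in the tree at the time of this thread:

```c
/*
 * Hypothetical helper (assumed name and signature): flush the user
 * cache for a whole folio mapped at @address in @vma, instead of
 * open-coding HPAGE_PMD_SIZE at the call site in page_mkclean_one().
 */
static inline void flush_cache_folio(struct vm_area_struct *vma,
				     unsigned long address,
				     struct folio *folio)
{
	flush_cache_range(vma, address, address + folio_size(folio));
}
```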
diff --git a/mm/rmap.c b/mm/rmap.c
index b0fd9dc19eba..65670cb805d6 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -974,7 +974,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 		if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
 			continue;
 
-		flush_cache_page(vma, address, page_to_pfn(page));
+		flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
 		entry = pmdp_invalidate(vma, address, pmd);
 		entry = pmd_wrprotect(entry);
 		entry = pmd_mkclean(entry);
flush_cache_page() only removes a PAGE_SIZE-sized range from the cache.
However, it does not cover all the pages of a THP, only the head page.
Replace it with flush_cache_range() to fix this issue. No problems have
been found due to this so far, maybe because few architectures have
virtually indexed caches.

Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/rmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
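For context, below is a sketch of the PMD-mapped THP branch of page_mkclean_one() after the fix, with explanatory comments added. The lines not shown in the hunk above (the pvmw locals and the trailing set_pmd_at()) are reproduced from the surrounding function as I understand it and should be checked against the tree:

```c
/* Sketch of the PMD branch in page_mkclean_one() after this patch. */
pmd_t *pmd = pvmw.pmd;	/* pvmw is the page_vma_mapped_walk state */
pmd_t entry;

if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
	continue;	/* mapping is already clean and write-protected */

/*
 * Write back the whole huge-page mapping, not just one base page,
 * on architectures with virtually indexed caches.
 */
flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);

entry = pmdp_invalidate(vma, address, pmd);	/* clear and flush old PMD */
entry = pmd_wrprotect(entry);			/* future writes re-dirty it */
entry = pmd_mkclean(entry);			/* drop the dirty bit */
set_pmd_at(vma->vm_mm, address, pmd, entry);	/* install the clean PMD */
```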