Message ID | 20250210193801.781278-15-david@redhat.com (mailing list archive) |
---|---|
State | New, archived |
Series | mm: fixes for device-exclusive entries (hmm) |
On Mon, 10 Feb 2025 20:37:56 +0100 David Hildenbrand <david@redhat.com> wrote:

> Ever since commit b756a3b5e7ea ("mm: device exclusive memory access")
> we can return with a device-exclusive entry from page_vma_mapped_walk().
>
> damon_folio_young_one() is not prepared for that, so teach it about these
> PFN swap PTEs. Note that device-private entries are so far not applicable
> on that path, as we expect ZONE_DEVICE pages so far only in migration code
> when it comes to the RMAP.
>
> The impact is rather small: we'd be calling pte_young() on a
> non-present PTE, which is not really defined to have semantics.
>
> Note that we could currently only run into this case with
> device-exclusive entries on THPs. We still adjust the mapcount on
> conversion to device-exclusive; this makes the rmap walk
> abort early for small folios, because we'll always have
> !folio_mapped() with a single device-exclusive entry. We'll adjust the
> mapcount logic once all page_vma_mapped_walk() users can properly
> handle device-exclusive entries.
>
> Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
> Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: SeongJae Park <sj@kernel.org>

Thanks,
SJ

[...]
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 0f9ae14f884dd..10d75f9ceeafb 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -92,12 +92,20 @@ static bool damon_folio_young_one(struct folio *folio,
 {
 	bool *accessed = arg;
 	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, addr, 0);
+	pte_t pte;
 
 	*accessed = false;
 	while (page_vma_mapped_walk(&pvmw)) {
 		addr = pvmw.address;
 		if (pvmw.pte) {
-			*accessed = pte_young(ptep_get(pvmw.pte)) ||
+			pte = ptep_get(pvmw.pte);
+
+			/*
+			 * PFN swap PTEs, such as device-exclusive ones, that
+			 * actually map pages are "old" from a CPU perspective.
+			 * The MMU notifier takes care of any device aspects.
+			 */
+			*accessed = (pte_present(pte) && pte_young(pte)) ||
 				!folio_test_idle(folio) ||
 				mmu_notifier_test_young(vma->vm_mm, addr);
 		} else {
Ever since commit b756a3b5e7ea ("mm: device exclusive memory access") we can
return with a device-exclusive entry from page_vma_mapped_walk().

damon_folio_young_one() is not prepared for that, so teach it about these
PFN swap PTEs. Note that device-private entries are so far not applicable
on that path, as we expect ZONE_DEVICE pages so far only in migration code
when it comes to the RMAP.

The impact is rather small: we'd be calling pte_young() on a non-present
PTE, which is not really defined to have semantics.

Note that we could currently only run into this case with device-exclusive
entries on THPs. We still adjust the mapcount on conversion to
device-exclusive; this makes the rmap walk abort early for small folios,
because we'll always have !folio_mapped() with a single device-exclusive
entry. We'll adjust the mapcount logic once all page_vma_mapped_walk()
users can properly handle device-exclusive entries.

Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/damon/paddr.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)
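[Editor's note: for readers following the logic outside of diff context, below is a minimal sketch of the access check as it reads after this patch, pulled out into a hypothetical helper. The helper name, its signature, and the standalone form are made up for illustration; the patch itself keeps this logic inline in damon_folio_young_one().]

/*
 * Sketch only, not part of the patch: damon_pte_accessed() is a made-up
 * helper mirroring the expression introduced above. It relies on
 * <linux/pgtable.h> (ptep_get, pte_present, pte_young),
 * <linux/page_idle.h> (folio_test_idle) and <linux/mmu_notifier.h>
 * (mmu_notifier_test_young).
 *
 * A PTE handed back by page_vma_mapped_walk() may be a non-present PFN
 * swap PTE, e.g. a device-exclusive entry. pte_young() has no defined
 * meaning for such an entry, so it is treated as "old" from the CPU's
 * perspective; device-side access is still reported via the MMU notifier.
 */
static bool damon_pte_accessed(struct folio *folio, struct vm_area_struct *vma,
			       pte_t *ptep, unsigned long addr)
{
	const pte_t pte = ptep_get(ptep);

	return (pte_present(pte) && pte_young(pte)) ||
		!folio_test_idle(folio) ||
		mmu_notifier_test_young(vma->vm_mm, addr);
}

Treating non-present PFN swap PTEs as "old" is deliberate: the CPU never walks such an entry and thus cannot have set its young bit, while device accesses are still surfaced through mmu_notifier_test_young(), as the in-code comment in the hunk above notes.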