Message ID | 20240926013506.860253-1-jthoughton@google.com (mailing list archive)
---|---
Series | mm: multi-gen LRU: Walk secondary MMU page tables while aging
On Thu, Sep 26, 2024, James Houghton wrote:
> This patchset makes it possible for MGLRU to consult secondary MMUs
> while doing aging, not just during eviction. This allows for more
> accurate reclaim decisions, which is especially important for proactive
> reclaim.

...

> James Houghton (14):
>   KVM: Remove kvm_handle_hva_range helper functions
>   KVM: Add lockless memslot walk to KVM
>   KVM: x86/mmu: Factor out spte atomic bit clearing routine
>   KVM: x86/mmu: Relax locking for kvm_test_age_gfn and kvm_age_gfn
>   KVM: x86/mmu: Rearrange kvm_{test_,}age_gfn
>   KVM: x86/mmu: Only check gfn age in shadow MMU if
>     indirect_shadow_pages > 0
>   mm: Add missing mmu_notifier_clear_young for !MMU_NOTIFIER
>   mm: Add has_fast_aging to struct mmu_notifier
>   mm: Add fast_only bool to test_young and clear_young MMU notifiers

Per offline discussions, there's a non-zero chance that fast_only won't
be needed, because it may be preferable to incorporate secondary MMUs
into MGLRU, even if they don't support "fast" aging.

What's the status on that front? Even if the status is "TBD", it'd be
very helpful to let others know, so that they don't spend time reviewing
code that might be completely thrown away.

>   KVM: Pass fast_only to kvm_{test_,}age_gfn
>   KVM: x86/mmu: Locklessly harvest access information from shadow MMU
>   KVM: x86/mmu: Enable has_fast_aging
>   mm: multi-gen LRU: Have secondary MMUs participate in aging
>   KVM: selftests: Add multi-gen LRU aging to access_tracking_perf_test
>
> Sean Christopherson (4):
>   KVM: x86/mmu: Refactor low level rmap helpers to prep for walking w/o
>     mmu_lock
>   KVM: x86/mmu: Add infrastructure to allow walking rmaps outside of
>     mmu_lock
>   KVM: x86/mmu: Add support for lockless walks of rmap SPTEs
>   KVM: x86/mmu: Support rmap walks without holding mmu_lock when aging
>     gfns
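For orientation, the interface change that the three mm patches above describe can be sketched roughly as follows. This is an illustration inferred from the patch titles, not code from the series; the `_sketch` type names are invented, and the exact signatures in the patches may differ.

```c
/*
 * Rough sketch of the mmu_notifier API shape implied by the patch
 * titles above (not verbatim from the series): test_young/clear_young
 * gain a fast_only argument, and has_fast_aging advertises whether a
 * notifier can honor it.
 */
struct mm_struct;
struct mmu_notifier;

struct mmu_notifier_ops_sketch {
	/*
	 * Clear Accessed state for [start, end); returns nonzero if any
	 * mapping was young. With fast_only == true, the callee should
	 * only do work that avoids heavyweight locking (e.g. KVM's
	 * lockless walks) and skip the rest.
	 */
	int (*clear_young)(struct mmu_notifier *mn, struct mm_struct *mm,
			   unsigned long start, unsigned long end,
			   bool fast_only);

	/* Test, without clearing, whether address was recently accessed. */
	int (*test_young)(struct mmu_notifier *mn, struct mm_struct *mm,
			  unsigned long address, bool fast_only);
};

/*
 * Per "mm: Add has_fast_aging to struct mmu_notifier", the flag lives
 * on the notifier itself, so callers can skip registrations that would
 * ignore fast_only anyway.
 */
struct mmu_notifier_sketch {
	const struct mmu_notifier_ops_sketch *ops;
	bool has_fast_aging;
};
```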
On Mon, Oct 14, 2024 at 4:22 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Thu, Sep 26, 2024, James Houghton wrote:
> > This patchset makes it possible for MGLRU to consult secondary MMUs
> > while doing aging, not just during eviction. This allows for more
> > accurate reclaim decisions, which is especially important for proactive
> > reclaim.
>
> ...
>
> > James Houghton (14):
> >   KVM: Remove kvm_handle_hva_range helper functions
> >   KVM: Add lockless memslot walk to KVM
> >   KVM: x86/mmu: Factor out spte atomic bit clearing routine
> >   KVM: x86/mmu: Relax locking for kvm_test_age_gfn and kvm_age_gfn
> >   KVM: x86/mmu: Rearrange kvm_{test_,}age_gfn
> >   KVM: x86/mmu: Only check gfn age in shadow MMU if
> >     indirect_shadow_pages > 0
> >   mm: Add missing mmu_notifier_clear_young for !MMU_NOTIFIER
> >   mm: Add has_fast_aging to struct mmu_notifier
> >   mm: Add fast_only bool to test_young and clear_young MMU notifiers
>
> Per offline discussions, there's a non-zero chance that fast_only won't
> be needed, because it may be preferable to incorporate secondary MMUs
> into MGLRU, even if they don't support "fast" aging.
>
> What's the status on that front? Even if the status is "TBD", it'd be
> very helpful to let others know, so that they don't spend time reviewing
> code that might be completely thrown away.

The fast_only MMU notifier changes will probably be removed in v8.

ChromeOS folks found that the way MGLRU *currently* interacts with KVM
is problematic. That is, today, with the MM_WALK MGLRU capability
enabled, normal PTEs have their Accessed bits cleared via a page table
scan and then again during an rmap walk upon attempted eviction, whereas
KVM SPTEs only have their Accessed bits cleared via the rmap walk at
eviction time. So KVM SPTEs have their Accessed bits cleared less
frequently than normal PTEs, and therefore they appear younger than they
should.

It turns out that this causes tab open latency regressions on ChromeOS
when a significant amount of memory is being used by a VM. IIUC, the fix
for this is to have MGLRU age SPTEs as often as it ages normal PTEs;
i.e., it should call the correct MMU notifiers each time it clears
Accessed bits on PTEs. The final patch in this series sort of does this,
but instead of calling the new fast_only notifier, we need to call the
normal test/clear_young() notifiers regardless of how fast they are.

This also means that the MGLRU changes no longer depend on the KVM
optimizations, as they can be motivated independently.

Yu, have I gotten anything wrong here? Do you have any more details to
share?
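James's description maps to a small change in where the notifiers get called. Below is a minimal sketch, assuming the existing `ptep_test_and_clear_young()` and `mmu_notifier_clear_young()` kernel helpers; the `lru_gen_age_pte()` wrapper itself is hypothetical, not a function from the patches.

```c
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

/*
 * Hypothetical helper illustrating the fix described above: whenever
 * MGLRU's page table scan clears the Accessed bit on a primary PTE,
 * also age the secondary MMU mappings (e.g. KVM SPTEs) for the same
 * range, so they are aged at the same cadence as normal PTEs instead
 * of only at eviction time.
 */
static bool lru_gen_age_pte(struct vm_area_struct *vma, unsigned long addr,
			    pte_t *pte)
{
	/* Clear the Accessed bit on the primary MMU's PTE. */
	bool young = ptep_test_and_clear_young(vma, addr, pte);

	/*
	 * Age secondary MMUs too, regardless of whether the notifier
	 * supports "fast" aging; skipping this step is what made SPTEs
	 * look younger than they really are.
	 */
	young |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE);

	return young;
}
```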
On Mon, Oct 14, 2024 at 6:07 PM James Houghton <jthoughton@google.com> wrote:
>
> On Mon, Oct 14, 2024 at 4:22 PM Sean Christopherson <seanjc@google.com> wrote:
> >
> > On Thu, Sep 26, 2024, James Houghton wrote:
> > > This patchset makes it possible for MGLRU to consult secondary MMUs
> > > while doing aging, not just during eviction. This allows for more
> > > accurate reclaim decisions, which is especially important for
> > > proactive reclaim.
> >
> > ...
> >
> > > James Houghton (14):
> > >   KVM: Remove kvm_handle_hva_range helper functions
> > >   KVM: Add lockless memslot walk to KVM
> > >   KVM: x86/mmu: Factor out spte atomic bit clearing routine
> > >   KVM: x86/mmu: Relax locking for kvm_test_age_gfn and kvm_age_gfn
> > >   KVM: x86/mmu: Rearrange kvm_{test_,}age_gfn
> > >   KVM: x86/mmu: Only check gfn age in shadow MMU if
> > >     indirect_shadow_pages > 0
> > >   mm: Add missing mmu_notifier_clear_young for !MMU_NOTIFIER
> > >   mm: Add has_fast_aging to struct mmu_notifier
> > >   mm: Add fast_only bool to test_young and clear_young MMU notifiers
> >
> > Per offline discussions, there's a non-zero chance that fast_only
> > won't be needed, because it may be preferable to incorporate secondary
> > MMUs into MGLRU, even if they don't support "fast" aging.
> >
> > What's the status on that front? Even if the status is "TBD", it'd be
> > very helpful to let others know, so that they don't spend time
> > reviewing code that might be completely thrown away.
>
> The fast_only MMU notifier changes will probably be removed in v8.
>
> ChromeOS folks found that the way MGLRU *currently* interacts with KVM
> is problematic. That is, today, with the MM_WALK MGLRU capability
> enabled, normal PTEs have their Accessed bits cleared via a page table
> scan and then again during an rmap walk upon attempted eviction, whereas
> KVM SPTEs only have their Accessed bits cleared via the rmap walk at
> eviction time. So KVM SPTEs have their Accessed bits cleared less
> frequently than normal PTEs, and therefore they appear younger than they
> should.
>
> It turns out that this causes tab open latency regressions on ChromeOS
> when a significant amount of memory is being used by a VM. IIUC, the fix
> for this is to have MGLRU age SPTEs as often as it ages normal PTEs;
> i.e., it should call the correct MMU notifiers each time it clears
> Accessed bits on PTEs. The final patch in this series sort of does this,
> but instead of calling the new fast_only notifier, we need to call the
> normal test/clear_young() notifiers regardless of how fast they are.
>
> This also means that the MGLRU changes no longer depend on the KVM
> optimizations, as they can be motivated independently.
>
> Yu, have I gotten anything wrong here? Do you have any more details to
> share?

Yes, that's precisely the problem. My original justification [1] for not
scanning the KVM MMU when lockless walks are not supported turned out to
be harmful to some workloads too. On one hand, scanning the KVM MMU when
it can't be done locklessly can cause KVM MMU lock contention; on the
other hand, not scanning the KVM MMU can skew anon/file LRU aging and
thrash the page cache. Given that the lock contention is being tackled,
the former seems to be the lesser of two evils.

[1] https://lore.kernel.org/linux-mm/CAOUHufYFHKLwt1PWp2uS6g174GZYRZURWJAmdUWs5eaKmhEeyQ@mail.gmail.com/
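To make the tradeoff Yu describes concrete, the policy shift can be illustrated as below. `mm_has_notifiers()` is an existing kernel helper; the function itself and the commented-out old-policy predicate are hypothetical, used only to contrast the two positions.

```c
#include <linux/mmu_notifier.h>

/*
 * Illustration only: the policy shift described above, not actual
 * kernel code.
 */
static bool lru_gen_should_age_secondary_mmu(struct mm_struct *mm)
{
	/*
	 * Old policy: skip secondary MMUs unless they could be aged
	 * locklessly, to avoid mmu_lock contention. This skewed
	 * anon/file LRU aging and thrashed the page cache.
	 *
	 *	return mmu_notifier_supports_lockless_aging(mm);  // hypothetical
	 */

	/*
	 * New position: always consult secondary MMUs during aging;
	 * lock contention is the lesser evil, and the lockless walks
	 * in this series mitigate it anyway.
	 */
	return mm_has_notifiers(mm);
}
```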