Series: LUF (Lazy Unmap Flush) reducing tlb numbers over 90% | State: Action Required | 26 patches
All 26 patches: submitted 2025-02-20 by Byungchul Park, state New (A/R/T, S/W/F, and Delegate columns empty for every patch).

[RFC,v12,26/26] mm/luf: implement luf debug feature
[RFC,v12,25/26] mm/vmscan: apply luf mechanism to unmapping during folio reclaim
[RFC,v12,24/26] mm/migrate: apply luf mechanism to unmapping during migration
[RFC,v12,23/26] mm: separate move/undo parts from migrate_pages_batch()
[RFC,v12,22/26] mm/page_alloc: not allow to tlb shootdown if !preemptable() && non_luf_pages_ok()
[RFC,v12,21/26] mm: perform luf tlb shootdown per zone in batched manner
[RFC,v12,20/26] mm, fs: skip tlb flushes for luf'd filemap that already has been done
[RFC,v12,19/26] mm: skip luf tlb flush for luf'd mm that already has been done
[RFC,v12,18/26] mm/page_alloc: retry 3 times to take pcp pages on luf check failure
[RFC,v12,17/26] x86/tlb, riscv/tlb, arm64/tlbflush, mm: remove cpus from tlb shootdown that already…
[RFC,v12,16/26] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped
[RFC,v12,15/26] fs, filemap: refactor to gather the scattered ->write_{begin,end}() calls
[RFC,v12,14/26] mm/rmap: recognize read-only tlb entries during batched tlb flush
[RFC,v12,13/26] mm: introduce pend_list in struct free_area to track luf'd pages
[RFC,v12,12/26] mm: delimit critical sections to take off pages from pcp or buddy alloctor
[RFC,v12,11/26] mm: deliver luf_key to pcp or buddy on free after unmapping
[RFC,v12,10/26] mm: introduce APIs to check if the page allocation is tlb shootdownable
[RFC,v12,09/26] mm: introduce API to perform tlb shootdown on exit from page allocator
[RFC,v12,08/26] mm: introduce luf_batch to be used as hash table to store luf meta data
[RFC,v12,07/26] mm: introduce luf_ugen to be used as a global timestamp
[RFC,v12,06/26] mm: move should_skip_kasan_poison() to mm/internal.h
[RFC,v12,05/26] mm/buddy: make room for a new variable, luf_key, in struct page
[RFC,v12,04/26] x86/tlb, riscv/tlb, mm/rmap: separate arch_tlbbatch_clear() out of arch_tlbbatch_fl…
[RFC,v12,03/26] riscv/tlb: add APIs manipulating tlb batch's arch data
[RFC,v12,02/26] arm64/tlbflush: add APIs manipulating tlb batch's arch data
[RFC,v12,01/26] x86/tlb: add APIs manipulating tlb batch's arch data