Message ID | Y3SWCu6NRaMQ5dbD@li-4a3a4a4c-28e5-11b2-a85c-a8d192c6f089.ibm.com (mailing list archive) |
---|---|
State | New |
Series | mm: mmu_gather: do not expose delayed_rmap flag |
On Tue, Nov 15, 2022 at 11:49 PM Alexander Gordeev <agordeev@linux.ibm.com> wrote:
>
> Flag delayed_rmap of 'struct mmu_gather' is rather
> a private member, but it is still accessed directly.
> Instead, let the TLB gather code access the flag.

Now, I set it up so that if you don't use delayed_rmap, the tlb_flush_rmaps() function ends up being an empty inline function, and as such the compiler should already have done this for you - including optimizing out the test that then doesn't even matter.

So this patch shouldn't *matter*, but it also isn't wrong, so..

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>

Thanks,
          Linus
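To illustrate the point with a self-contained sketch (plain userspace C, not the kernel headers; the struct layouts, the USE_DELAYED_RMAP switch and the printf body are made up for the example): when the feature is compiled out, tlb_flush_rmaps() collapses to an empty static inline, so a caller-side delayed_rmap test adds nothing the compiler would not already remove.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the kernel types; only the flag the patch touches is modelled. */
struct vm_area_struct { int unused; };
struct mmu_gather { bool delayed_rmap; };

#ifdef USE_DELAYED_RMAP	/* made-up switch standing in for the real config option */
static void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
{
	/* The check the patch moves into the callee. */
	if (!tlb->delayed_rmap)
		return;
	printf("flushing delayed rmaps\n");
}
#else
/* Feature compiled out: an empty inline, so the call (and any guard around it) is elided. */
static inline void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
{
}
#endif

/* Caller in the style of zap_pte_range() after the patch: no peeking at the flag. */
static void zap_range(struct mmu_gather *tlb, struct vm_area_struct *vma)
{
	tlb_flush_rmaps(tlb, vma);
}

int main(void)
{
	struct mmu_gather tlb = { .delayed_rmap = true };
	struct vm_area_struct vma = { 0 };

	zap_range(&tlb, &vma);
	return 0;
}

Either way, moving the check into the callee keeps delayed_rmap private to the TLB gather code, which is what the patch is after.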
diff --git a/mm/memory.c b/mm/memory.c
index 42f10cc1de58..38b58cd07b52 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1465,8 +1465,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	/* Do the actual TLB flush before dropping ptl */
 	if (force_flush) {
 		tlb_flush_mmu_tlbonly(tlb);
-		if (tlb->delayed_rmap)
-			tlb_flush_rmaps(tlb, vma);
+		tlb_flush_rmaps(tlb, vma);
 	}
 	pte_unmap_unlock(start_pte, ptl);
 
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 79de59136cd2..9f22309affee 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -60,6 +60,9 @@ void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
 	struct mmu_gather_batch *batch;
 
+	if (!tlb->delayed_rmap)
+		return;
+
 	batch = tlb->active;
 	for (int i = 0; i < batch->nr; i++) {
 		struct encoded_page *enc = batch->encoded_pages[i];
Flag delayed_rmap of 'struct mmu_gather' is rather a private member, but it is still accessed directly. Instead, let the TLB gather code access the flag.

Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
---
 mm/memory.c     | 3 +--
 mm/mmu_gather.c | 3 +++
 2 files changed, 4 insertions(+), 2 deletions(-)