Message ID | 20180823084709.19717-2-npiggin@gmail.com (mailing list archive)
---|---
State | New, archived
Series | minor mmu_gather patches
On Thu, Aug 23, 2018 at 06:47:08PM +1000, Nicholas Piggin wrote:
> There is no need to call this from tlb_flush_mmu_tlbonly, it
> logically belongs with tlb_flush_mmu_free. This allows some
> code consolidation with a subsequent fix.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>  mm/memory.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)

Looks good to me, thanks:

Acked-by: Will Deacon <will.deacon@arm.com>

Will

> diff --git a/mm/memory.c b/mm/memory.c
> index 19f47d7b9b86..7c58310734eb 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -245,9 +245,6 @@ static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
>
>  	tlb_flush(tlb);
>  	mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);
> -#ifdef CONFIG_HAVE_RCU_TABLE_FREE
> -	tlb_table_flush(tlb);
> -#endif
>  	__tlb_reset_range(tlb);
>  }
>
> @@ -255,6 +252,9 @@ static void tlb_flush_mmu_free(struct mmu_gather *tlb)
>  {
>  	struct mmu_gather_batch *batch;
>
> +#ifdef CONFIG_HAVE_RCU_TABLE_FREE
> +	tlb_table_flush(tlb);
> +#endif
>  	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
>  		free_pages_and_swap_cache(batch->pages, batch->nr);
>  		batch->nr = 0;
> --
> 2.17.0
>
On Thu, 2018-08-23 at 18:47 +1000, Nicholas Piggin wrote:
> There is no need to call this from tlb_flush_mmu_tlbonly, it
> logically belongs with tlb_flush_mmu_free. This allows some
> code consolidation with a subsequent fix.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>

Reviewed-by: Rik van Riel <riel@surriel.com>

This patch also fixes an infinite recursion bug with
CONFIG_HAVE_RCU_TABLE_FREE enabled, which has this call trace:

tlb_table_flush
 -> tlb_table_invalidate
 -> tlb_flush_mmu_tlbonly
 -> tlb_table_flush
 -> ... (infinite recursion)

This should probably be applied sooner rather than later.
On Thu, Sep 06, 2018 at 04:29:59PM -0400, Rik van Riel wrote:
> On Thu, 2018-08-23 at 18:47 +1000, Nicholas Piggin wrote:
> > There is no need to call this from tlb_flush_mmu_tlbonly, it
> > logically belongs with tlb_flush_mmu_free. This allows some
> > code consolidation with a subsequent fix.
> >
> > Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>
> Reviewed-by: Rik van Riel <riel@surriel.com>
>
> This patch also fixes an infinite recursion bug
> with CONFIG_HAVE_RCU_TABLE_FREE enabled, which
> has this call trace:
>
> tlb_table_flush
> -> tlb_table_invalidate
> -> tlb_flush_mmu_tlbonly
> -> tlb_table_flush
> -> ... (infinite recursion)
>
> This should probably be applied sooner rather than
> later.

It's already in mainline with a cc stable, afaict.

Will
On Fri, 2018-09-07 at 14:44 +0100, Will Deacon wrote:
> On Thu, Sep 06, 2018 at 04:29:59PM -0400, Rik van Riel wrote:
> > On Thu, 2018-08-23 at 18:47 +1000, Nicholas Piggin wrote:
> > > There is no need to call this from tlb_flush_mmu_tlbonly, it
> > > logically belongs with tlb_flush_mmu_free. This allows some
> > > code consolidation with a subsequent fix.
> > >
> > > Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> >
> > Reviewed-by: Rik van Riel <riel@surriel.com>
> >
> > This patch also fixes an infinite recursion bug
> > with CONFIG_HAVE_RCU_TABLE_FREE enabled, which
> > has this call trace:
> >
> > tlb_table_flush
> > -> tlb_table_invalidate
> > -> tlb_flush_mmu_tlbonly
> > -> tlb_table_flush
> > -> ... (infinite recursion)
> >
> > This should probably be applied sooner rather than
> > later.
>
> It's already in mainline with a cc stable afaict.

Sure enough, it is. I guess I have too many kernel trees
on this system and was looking at the wrong one somehow.
diff --git a/mm/memory.c b/mm/memory.c
index 19f47d7b9b86..7c58310734eb 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -245,9 +245,6 @@ static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)

 	tlb_flush(tlb);
 	mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);
-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
-	tlb_table_flush(tlb);
-#endif
 	__tlb_reset_range(tlb);
 }

@@ -255,6 +252,9 @@ static void tlb_flush_mmu_free(struct mmu_gather *tlb)
 {
 	struct mmu_gather_batch *batch;

+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+	tlb_table_flush(tlb);
+#endif
 	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
 		free_pages_and_swap_cache(batch->pages, batch->nr);
 		batch->nr = 0;
There is no need to call this from tlb_flush_mmu_tlbonly, it
logically belongs with tlb_flush_mmu_free. This allows some
code consolidation with a subsequent fix.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 mm/memory.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)