| Message ID | 1598273705-69124-28-git-send-email-alex.shi@linux.alibaba.com (mailing list archive) |
| --- | --- |
| State | New, archived |
| Series | per memcg lru_lock |
The patch needs an update since a bug was found:

```diff
From 547d95205e666c7c5a81c44b7b1f8e1b6c7b1749 Mon Sep 17 00:00:00 2001
From: Alex Shi <alex.shi@linux.alibaba.com>
Date: Sat, 1 Aug 2020 22:49:31 +0800
Subject: [PATCH] mm/swap.c: optimizing __pagevec_lru_add lru_lock

The current relock logic changes the lru_lock whenever a new lruvec is
found, so if two memcgs are reading files or allocating pages at an
equal rate, they can end up taking the lru_lock alternately. This patch
records the lruvecs the pagevec needs and takes each lru_lock only once
in that scenario, which reduces lock contention.

Suggested-by: Konstantin Khlebnikov <koct9i@gmail.com>
Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/swap.c | 42 +++++++++++++++++++++++++++++++++++-------
 1 file changed, 35 insertions(+), 7 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 2ac78e8fab71..dba3f0aba2a0 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -958,24 +958,52 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
 	trace_mm_lru_insertion(page, lru);
 }
 
+struct add_lruvecs {
+	struct list_head lists[PAGEVEC_SIZE];
+	struct lruvec *vecs[PAGEVEC_SIZE];
+};
+
 /*
  * Add the passed pages to the LRU, then drop the caller's refcount
  * on them. Reinitialises the caller's pagevec.
  */
 void __pagevec_lru_add(struct pagevec *pvec)
 {
-	int i;
+	int i, j, total;
 	struct lruvec *lruvec = NULL;
 	unsigned long flags = 0;
+	struct page *page;
+	struct add_lruvecs lruvecs;
+
+	for (i = total = 0; i < pagevec_count(pvec); i++) {
+		page = pvec->pages[i];
+		lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
+		lruvecs.vecs[i] = NULL;
+
+		/* Try to find a same lruvec */
+		for (j = 0; j < total; j++)
+			if (lruvec == lruvecs.vecs[j])
+				break;
+		/* A new lruvec */
+		if (j == total) {
+			INIT_LIST_HEAD(&lruvecs.lists[total]);
+			lruvecs.vecs[total] = lruvec;
+			total++;
+		}
 
-	for (i = 0; i < pagevec_count(pvec); i++) {
-		struct page *page = pvec->pages[i];
+		list_add(&page->lru, &lruvecs.lists[j]);
+	}
 
-		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
-		__pagevec_lru_add_fn(page, lruvec);
+	for (i = 0; i < total; i++) {
+		spin_lock_irqsave(&lruvecs.vecs[i]->lru_lock, flags);
+		while (!list_empty(&lruvecs.lists[i])) {
+			page = lru_to_page(&lruvecs.lists[i]);
+			list_del(&page->lru);
+			__pagevec_lru_add_fn(page, lruvecs.vecs[i]);
+		}
+		spin_unlock_irqrestore(&lruvecs.vecs[i]->lru_lock, flags);
 	}
-	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+	release_pages(pvec->pages, pvec->nr);
 	pagevec_reinit(pvec);
 }
```
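To make the locking pattern easier to see outside the kernel context, here is a minimal user-space sketch of the same idea: bucket a batch of items by the lock that protects them, then take each lock exactly once. It uses a pthread mutex in place of the lruvec spinlock, and the names (`batch_add`, `struct group`, `BATCH_SIZE`) are illustrative, not from the patch:

```c
#include <pthread.h>
#include <stdio.h>

#define BATCH_SIZE 15   /* stand-in for PAGEVEC_SIZE */
#define NGROUPS     4   /* distinct "lruvecs" available in this demo */

/* Stand-in for a lruvec: a lock plus a counter for what it protects. */
struct group {
	pthread_mutex_t lock;
	int nr_added;
};

struct item {
	struct group *grp;      /* which "lruvec" this item belongs to */
};

/*
 * Same shape as the patched __pagevec_lru_add(): first bucket the batch
 * by group without holding any lock, then take each group's lock exactly
 * once and drain its bucket.  Assumes n <= BATCH_SIZE.
 */
static void batch_add(struct item *items, int n)
{
	struct group *vecs[BATCH_SIZE];
	int idx[BATCH_SIZE];    /* bucket index for each item */
	int i, j, total = 0;

	for (i = 0; i < n; i++) {
		/* Try to find a same group, as the patch does for lruvecs. */
		for (j = 0; j < total; j++)
			if (items[i].grp == vecs[j])
				break;
		if (j == total)         /* a new group */
			vecs[total++] = items[i].grp;
		idx[i] = j;
	}

	/* One lock/unlock per distinct group, not per item. */
	for (j = 0; j < total; j++) {
		pthread_mutex_lock(&vecs[j]->lock);
		for (i = 0; i < n; i++)
			if (idx[i] == j)
				vecs[j]->nr_added++;
		pthread_mutex_unlock(&vecs[j]->lock);
	}
}

int main(void)
{
	struct group groups[NGROUPS];
	struct item items[BATCH_SIZE];
	int i;

	for (i = 0; i < NGROUPS; i++) {
		pthread_mutex_init(&groups[i].lock, NULL);
		groups[i].nr_added = 0;
	}
	/* Interleave two groups: the pattern that causes lock ping-pong. */
	for (i = 0; i < BATCH_SIZE; i++)
		items[i].grp = &groups[i % 2];

	batch_add(items, BATCH_SIZE);
	printf("group0=%d group1=%d\n", groups[0].nr_added, groups[1].nr_added);
	return 0;
}
```

With the interleaved input above, a relock-per-page approach would acquire a lock 15 times (once per item), while the batched version acquires each of the 2 locks once. That is the contention reduction the commit message describes.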
For reference, here is the original version of the patch, which the update above replaces:

```diff
The current relock logic changes the lru_lock whenever a new lruvec is
found, so if two memcgs are reading files or allocating pages at an
equal rate, they can end up taking the lru_lock alternately. This patch
records the lruvecs the pagevec needs and takes each lru_lock only once
in that scenario, which reduces lock contention.

Suggested-by: Konstantin Khlebnikov <koct9i@gmail.com>
Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/swap.c | 43 ++++++++++++++++++++++++++++++++++++-------
 1 file changed, 36 insertions(+), 7 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 2ac78e8fab71..fe53449fa1b8 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -958,24 +958,53 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
 	trace_mm_lru_insertion(page, lru);
 }
 
+struct add_lruvecs {
+	struct list_head lists[PAGEVEC_SIZE];
+	struct lruvec *vecs[PAGEVEC_SIZE];
+};
+
 /*
  * Add the passed pages to the LRU, then drop the caller's refcount
  * on them. Reinitialises the caller's pagevec.
  */
 void __pagevec_lru_add(struct pagevec *pvec)
 {
-	int i;
+	int i, j, total;
 	struct lruvec *lruvec = NULL;
 	unsigned long flags = 0;
+	struct page *page;
+	struct add_lruvecs lruvecs;
+
+	lruvecs.vecs[0] = NULL;
+	for (i = total = 0; i < pagevec_count(pvec); i++) {
+		page = pvec->pages[i];
+		lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
+
+		/* Try to find a same lruvec */
+		for (j = 0; j <= total; j++)
+			if (lruvec == lruvecs.vecs[j])
+				break;
+		/* A new lruvec */
+		if (j > total) {
+			INIT_LIST_HEAD(&lruvecs.lists[total]);
+			lruvecs.vecs[total] = lruvec;
+			j = total++;
+			lruvecs.vecs[total] = 0;
+		}
 
-	for (i = 0; i < pagevec_count(pvec); i++) {
-		struct page *page = pvec->pages[i];
+		list_add(&page->lru, &lruvecs.lists[j]);
+	}
 
-		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
-		__pagevec_lru_add_fn(page, lruvec);
+	for (i = 0; i < total; i++) {
+		spin_lock_irqsave(&lruvecs.vecs[i]->lru_lock, flags);
+		while (!list_empty(&lruvecs.lists[i])) {
+			page = lru_to_page(&lruvecs.lists[i]);
+			list_del(&page->lru);
+			__pagevec_lru_add_fn(page, lruvecs.vecs[i]);
+		}
+		spin_unlock_irqrestore(&lruvecs.vecs[i]->lru_lock, flags);
 	}
-	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+	release_pages(pvec->pages, pvec->nr);
 	pagevec_reinit(pvec);
 }
```
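The mail does not spell out what the bug was, but comparing the two versions suggests it is the sentinel bookkeeping in this original one: `vecs[]` has only PAGEVEC_SIZE slots, yet after recording a new lruvec the code stores a fresh NULL sentinel at `vecs[total]`. If every page in a full pagevec belongs to a distinct lruvec, `total` reaches PAGEVEC_SIZE and that store writes one element past the array. The updated version drops the sentinel and bounds the search with `j < total` instead. A small stand-alone sketch of the suspect loop follows; the names are hypothetical, and the demo array gets one spare slot so the demo itself stays in bounds:

```c
#include <assert.h>
#include <stddef.h>

#define PAGEVEC_SIZE 15

/*
 * The search-or-append loop of the original version, isolated.  vecs[]
 * holds 'total' live entries plus a NULL sentinel at vecs[total].
 */
static int find_or_add(void *vecs[], int *total, void *lruvec)
{
	int j;

	for (j = 0; j <= *total; j++)   /* sentinel makes this j <= total */
		if (lruvec == vecs[j])
			break;
	if (j > *total) {               /* not found: append + new sentinel */
		vecs[*total] = lruvec;
		j = (*total)++;
		/*
		 * In the kernel struct, vecs[] has only PAGEVEC_SIZE
		 * entries, so once total reaches PAGEVEC_SIZE this sentinel
		 * store lands out of bounds -- the suspected bug.
		 */
		vecs[*total] = NULL;
	}
	return j;
}

int main(void)
{
	void *vecs[PAGEVEC_SIZE + 1] = { NULL }; /* spare slot for the demo */
	long fake[PAGEVEC_SIZE];        /* 15 distinct "lruvec" addresses */
	int total = 0, i;

	for (i = 0; i < PAGEVEC_SIZE; i++)
		find_or_add(vecs, &total, &fake[i]);
	/* The last append wrote the sentinel to vecs[PAGEVEC_SIZE]. */
	assert(total == PAGEVEC_SIZE);
	return 0;
}
```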