From patchwork Thu Jan 16 03:05:00 2020
X-Patchwork-Submitter: Alex Shi
X-Patchwork-Id: 11335931
From: Alex Shi
To: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
    hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
    yang.shi@linux.alibaba.com, willy@infradead.org, shakeelb@google.com,
    hannes@cmpxchg.org
Cc: yun.wang@linux.alibaba.com
Subject: [PATCH v8 01/10] mm/vmscan: remove unnecessary lruvec adding
Date: Thu, 16 Jan 2020 11:05:00 +0800
Message-Id: <1579143909-156105-2-git-send-email-alex.shi@linux.alibaba.com>
In-Reply-To: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com>

We don't have to add a freeable page to the LRU and then remove it again.
Skipping that round trip saves a couple of actions and makes the page
movement clearer.

The SetPageLRU needs to be kept here, before put_page_testzero(), for list
integrity. Otherwise:

    #0 move_pages_to_lru                #1 release_pages
    if (put_page_testzero())
                                        if !put_page_testzero
                                           !PageLRU //skip lru_lock
      list_add(&page->lru,);
                                        list_add(&page->lru,) //corrupt

Signed-off-by: Alex Shi
Cc: Andrew Morton
Cc: Johannes Weiner
Cc: Hugh Dickins
Cc: yun.wang@linux.alibaba.com
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/vmscan.c | 32 +++++++++++++++++++++-----------
 1 file changed, 21 insertions(+), 11 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 572fb17c6273..a270d32bdb94 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1852,26 +1852,29 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 	while (!list_empty(list)) {
 		page = lru_to_page(list);
 		VM_BUG_ON_PAGE(PageLRU(page), page);
+		list_del(&page->lru);
 		if (unlikely(!page_evictable(page))) {
-			list_del(&page->lru);
 			spin_unlock_irq(&pgdat->lru_lock);
 			putback_lru_page(page);
 			spin_lock_irq(&pgdat->lru_lock);
 			continue;
 		}
 
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
+		/*
+		 * The SetPageLRU needs to be kept here for list integrity.
+		 * Otherwise:
+		 *   #0 move_pages_to_lru             #1 release_pages
+		 *   if (put_page_testzero())
+		 *                                     if !put_page_testzero
+		 *                                        !PageLRU //skip lru_lock
+		 *     list_add(&page->lru,);
+		 *                                     list_add(&page->lru,) //corrupt
+		 */
 		SetPageLRU(page);
-		lru = page_lru(page);
-
-		nr_pages = hpage_nr_pages(page);
-		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
-		list_move(&page->lru, &lruvec->lists[lru]);
 
-		if (put_page_testzero(page)) {
+		if (unlikely(put_page_testzero(page))) {
 			__ClearPageLRU(page);
 			__ClearPageActive(page);
-			del_page_from_lru_list(page, lruvec, lru);
 
 			if (unlikely(PageCompound(page))) {
 				spin_unlock_irq(&pgdat->lru_lock);
@@ -1879,9 +1882,16 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 				spin_lock_irq(&pgdat->lru_lock);
 			} else
 				list_add(&page->lru, &pages_to_free);
-		} else {
-			nr_moved += nr_pages;
+
+			continue;
 		}
+
+		lruvec = mem_cgroup_page_lruvec(page, pgdat);
+		lru = page_lru(page);
+		nr_pages = hpage_nr_pages(page);
+
+		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
+		list_add(&page->lru, &lruvec->lists[lru]);
+		nr_moved += nr_pages;
 	}
 
 	/*
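
The rework is split across the two hunks above; condensed, the resulting
move_pages_to_lru() loop looks roughly like this (a sketch for readability
only; the unevictable and compound-page paths are abbreviated, so it is not
the literal source):

	while (!list_empty(list)) {
		page = lru_to_page(list);
		list_del(&page->lru);

		if (unlikely(!page_evictable(page))) {
			/* drop lru_lock, putback_lru_page(page), retake lru_lock */
			continue;
		}

		/*
		 * Publish PageLRU before dropping our reference, so a racing
		 * release_pages() that takes the refcount to zero sees PageLRU
		 * and grabs lru_lock before touching page->lru.
		 */
		SetPageLRU(page);

		if (unlikely(put_page_testzero(page))) {
			/* we held the last reference: undo the LRU state and free */
			__ClearPageLRU(page);
			__ClearPageActive(page);
			list_add(&page->lru, &pages_to_free);
			continue;
		}

		/* still referenced: link the page into its lruvec list */
		lruvec = mem_cgroup_page_lruvec(page, pgdat);
		lru = page_lru(page);
		update_lru_size(lruvec, lru, page_zonenum(page), hpage_nr_pages(page));
		list_add(&page->lru, &lruvec->lists[lru]);
		nr_moved += hpage_nr_pages(page);
	}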
From patchwork Thu Jan 16 03:05:01 2020
X-Patchwork-Submitter: Alex Shi
X-Patchwork-Id: 11335915
From: Alex Shi
To: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
    hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
    yang.shi@linux.alibaba.com, willy@infradead.org, shakeelb@google.com,
    hannes@cmpxchg.org
Cc: Michal Hocko, Vladimir Davydov
Subject: [PATCH v8 02/10] mm/memcg: fold lock_page_lru into commit_charge
Date: Thu, 16 Jan 2020 11:05:01 +0800
Message-Id: <1579143909-156105-3-git-send-email-alex.shi@linux.alibaba.com>
In-Reply-To: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com>

As Konstantin Khlebnikov mentioned: Also I don't like these
functions: - called lock/unlock but actually also isolates - used just once - pgdat evaluated twice Cleanup and fold these functions into commit_charge. It also reduces lock time while lrucare && !PageLRU. Signed-off-by: Alex Shi Cc: Johannes Weiner Cc: Michal Hocko Cc: Konstantin Khlebnikov Cc: Vladimir Davydov Cc: Andrew Morton Cc: cgroups@vger.kernel.org Cc: linux-mm@kvack.org Cc: linux-kernel@vger.kernel.org --- mm/memcontrol.c | 57 ++++++++++++++++++++------------------------------------- 1 file changed, 20 insertions(+), 37 deletions(-) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index c5b5f74cfd4d..d92538a9185c 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -2570,41 +2570,11 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages) css_put_many(&memcg->css, nr_pages); } -static void lock_page_lru(struct page *page, int *isolated) -{ - pg_data_t *pgdat = page_pgdat(page); - - spin_lock_irq(&pgdat->lru_lock); - if (PageLRU(page)) { - struct lruvec *lruvec; - - lruvec = mem_cgroup_page_lruvec(page, pgdat); - ClearPageLRU(page); - del_page_from_lru_list(page, lruvec, page_lru(page)); - *isolated = 1; - } else - *isolated = 0; -} - -static void unlock_page_lru(struct page *page, int isolated) -{ - pg_data_t *pgdat = page_pgdat(page); - - if (isolated) { - struct lruvec *lruvec; - - lruvec = mem_cgroup_page_lruvec(page, pgdat); - VM_BUG_ON_PAGE(PageLRU(page), page); - SetPageLRU(page); - add_page_to_lru_list(page, lruvec, page_lru(page)); - } - spin_unlock_irq(&pgdat->lru_lock); -} - static void commit_charge(struct page *page, struct mem_cgroup *memcg, bool lrucare) { - int isolated; + struct lruvec *lruvec = NULL; + pg_data_t *pgdat; VM_BUG_ON_PAGE(page->mem_cgroup, page); @@ -2612,9 +2582,17 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg, * In some cases, SwapCache and FUSE(splice_buf->radixtree), the page * may already be on some other mem_cgroup's LRU. Take care of it. 
*/ - if (lrucare) - lock_page_lru(page, &isolated); - + if (lrucare) { + pgdat = page_pgdat(page); + spin_lock_irq(&pgdat->lru_lock); + + if (PageLRU(page)) { + lruvec = mem_cgroup_page_lruvec(page, pgdat); + ClearPageLRU(page); + del_page_from_lru_list(page, lruvec, page_lru(page)); + } else + spin_unlock_irq(&pgdat->lru_lock); + } /* * Nobody should be changing or seriously looking at * page->mem_cgroup at this point: @@ -2631,8 +2609,13 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg, */ page->mem_cgroup = memcg; - if (lrucare) - unlock_page_lru(page, isolated); + if (lrucare && lruvec) { + lruvec = mem_cgroup_page_lruvec(page, pgdat); + VM_BUG_ON_PAGE(PageLRU(page), page); + SetPageLRU(page); + add_page_to_lru_list(page, lruvec, page_lru(page)); + spin_unlock_irq(&pgdat->lru_lock); + } } #ifdef CONFIG_MEMCG_KMEM From patchwork Thu Jan 16 03:05:02 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Alex Shi X-Patchwork-Id: 11335919 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E56EB92A for ; Thu, 16 Jan 2020 03:05:25 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 84886214AF for ; Thu, 16 Jan 2020 03:05:25 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 84886214AF Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 559448E0023; Wed, 15 Jan 2020 22:05:21 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 46BDD8E001C; Wed, 15 Jan 2020 22:05:21 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 001C18E001C; Wed, 15 Jan 2020 22:05:20 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id D1DAF8E0022 for ; Wed, 15 Jan 2020 22:05:20 -0500 (EST) Received: from smtpin26.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with SMTP id 88D4A181AEF07 for ; Thu, 16 Jan 2020 03:05:20 +0000 (UTC) X-FDA: 76382006400.26.sun27_859e47eae1b19 X-Spam-Summary: 
From: Alex Shi
To: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
    hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
    yang.shi@linux.alibaba.com, willy@infradead.org, shakeelb@google.com,
    hannes@cmpxchg.org
Cc: Michal Hocko, Vladimir Davydov, Roman Gushchin, Chris Down,
    Thomas Gleixner, Vlastimil Babka, Qian Cai, Andrey Ryabinin,
    "Kirill A. Shutemov", Jérôme Glisse, Andrea Arcangeli, David Rientjes,
    "Aneesh Kumar K.V", swkhack, "Potyra, Stefan", Mike Rapoport,
    Stephen Rothwell, Colin Ian King, Jason Gunthorpe, Mauro Carvalho Chehab,
    Peng Fan, Nikolay Borisov, Ira Weiny, Kirill Tkhai, Yafang Shao
Subject: [PATCH v8 03/10] mm/lru: replace pgdat lru_lock with lruvec lock
Date: Thu, 16 Jan 2020 11:05:02 +0800
Message-Id: <1579143909-156105-4-git-send-email-alex.shi@linux.alibaba.com>
In-Reply-To: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com>

This patchset moves lru_lock into the lruvec, giving each lruvec its own
lru_lock and therefore each memcg a lru_lock per node. On a large multi-node
machine, memcgs no longer have to wait on a single per-node pgdat->lru_lock;
each can proceed under its own lock.

This is the main patch replacing the per-node lru_lock with the per-memcg
lruvec lock; it also folds lock_page_lru into commit_charge. It introduces
the function lock_page_lruvec, which locks the page's memcg and then that
memcg's lruvec->lru_lock.
(Thanks Johannes Weiner, Hugh Dickins and Konstantin Khlebnikov suggestion/reminder on them) According to Daniel Jordan's suggestion, I run 208 'dd' with on 104 containers on a 2s * 26cores * HT box with a modefied case: https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/tree/case-lru-file-readtwice With this and later patches, the readtwice performance increases about 80% with containers. Signed-off-by: Alex Shi Cc: Johannes Weiner Cc: Michal Hocko Cc: Vladimir Davydov Cc: Andrew Morton Cc: Roman Gushchin Cc: Shakeel Butt Cc: Chris Down Cc: Thomas Gleixner Cc: Mel Gorman Cc: Vlastimil Babka Cc: Qian Cai Cc: Andrey Ryabinin Cc: "Kirill A. Shutemov" Cc: "Jérôme Glisse" Cc: Andrea Arcangeli Cc: Yang Shi Cc: David Rientjes Cc: "Aneesh Kumar K.V" Cc: swkhack Cc: "Potyra, Stefan" Cc: Mike Rapoport Cc: Stephen Rothwell Cc: Colin Ian King Cc: Jason Gunthorpe Cc: Mauro Carvalho Chehab Cc: Matthew Wilcox Cc: Peng Fan Cc: Nikolay Borisov Cc: Ira Weiny Cc: Kirill Tkhai Cc: Yafang Shao Cc: Konstantin Khlebnikov Cc: Hugh Dickins Cc: Tejun Heo Cc: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org Cc: cgroups@vger.kernel.org --- include/linux/memcontrol.h | 27 ++++++++++++++++ include/linux/mmzone.h | 2 ++ mm/compaction.c | 55 ++++++++++++++++++++----------- mm/huge_memory.c | 18 ++++------- mm/memcontrol.c | 61 +++++++++++++++++++++++++++------- mm/mlock.c | 32 +++++++++--------- mm/mmzone.c | 1 + mm/page_idle.c | 7 ++-- mm/swap.c | 75 +++++++++++++++++------------------------- mm/vmscan.c | 81 +++++++++++++++++++++++++--------------------- 10 files changed, 215 insertions(+), 144 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index a7a0a1a5c8d5..8389b9b927ef 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -417,6 +417,10 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg, } struct lruvec *mem_cgroup_page_lruvec(struct page *, struct pglist_data *); +struct lruvec *lock_page_lruvec_irq(struct page *); +struct lruvec *lock_page_lruvec_irqsave(struct page *, unsigned long*); +void unlock_page_lruvec_irq(struct lruvec *); +void unlock_page_lruvec_irqrestore(struct lruvec *, unsigned long); struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p); @@ -900,6 +904,29 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page, { return &pgdat->__lruvec; } +#define lock_page_lruvec_irq(page) \ +({ \ + struct pglist_data *pgdat = page_pgdat(page); \ + spin_lock_irq(&pgdat->__lruvec.lru_lock); \ + &pgdat->__lruvec; \ +}) + +#define lock_page_lruvec_irqsave(page, flagsp) \ +({ \ + struct pglist_data *pgdat = page_pgdat(page); \ + spin_lock_irqsave(&pgdat->__lruvec.lru_lock, *flagsp); \ + &pgdat->__lruvec; \ +}) + +#define unlock_page_lruvec_irq(lruvec) \ +({ \ + spin_unlock_irq(&lruvec->lru_lock); \ +}) + +#define unlock_page_lruvec_irqrestore(lruvec, flags) \ +({ \ + spin_unlock_irqrestore(&lruvec->lru_lock, flags); \ +}) static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg) { diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 89d8ff06c9ce..c5455675acf2 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -311,6 +311,8 @@ struct lruvec { unsigned long refaults; /* Various lruvec state flags (enum lruvec_flags) */ unsigned long flags; + /* per lruvec lru_lock for memcg */ + spinlock_t lru_lock; #ifdef CONFIG_MEMCG struct pglist_data *pgdat; #endif diff --git a/mm/compaction.c b/mm/compaction.c index 672d3c78c6ab..8c0a2da217d8 100644 
--- a/mm/compaction.c +++ b/mm/compaction.c @@ -786,7 +786,7 @@ static bool too_many_isolated(pg_data_t *pgdat) unsigned long nr_scanned = 0, nr_isolated = 0; struct lruvec *lruvec; unsigned long flags = 0; - bool locked = false; + struct lruvec *locked_lruvec = NULL; struct page *page = NULL, *valid_page = NULL; unsigned long start_pfn = low_pfn; bool skip_on_failure = false; @@ -846,11 +846,20 @@ static bool too_many_isolated(pg_data_t *pgdat) * contention, to give chance to IRQs. Abort completely if * a fatal signal is pending. */ - if (!(low_pfn % SWAP_CLUSTER_MAX) - && compact_unlock_should_abort(&pgdat->lru_lock, - flags, &locked, cc)) { - low_pfn = 0; - goto fatal_pending; + if (!(low_pfn % SWAP_CLUSTER_MAX)) { + if (locked_lruvec) { + unlock_page_lruvec_irqrestore(locked_lruvec, flags); + locked_lruvec = NULL; + } + + if (fatal_signal_pending(current)) { + cc->contended = true; + + low_pfn = 0; + goto fatal_pending; + } + + cond_resched(); } if (!pfn_valid_within(low_pfn)) @@ -919,10 +928,9 @@ static bool too_many_isolated(pg_data_t *pgdat) */ if (unlikely(__PageMovable(page)) && !PageIsolated(page)) { - if (locked) { - spin_unlock_irqrestore(&pgdat->lru_lock, - flags); - locked = false; + if (locked_lruvec) { + unlock_page_lruvec_irqrestore(locked_lruvec, flags); + locked_lruvec = NULL; } if (!isolate_movable_page(page, isolate_mode)) @@ -948,10 +956,20 @@ static bool too_many_isolated(pg_data_t *pgdat) if (!(cc->gfp_mask & __GFP_FS) && page_mapping(page)) goto isolate_fail; + lruvec = mem_cgroup_page_lruvec(page, pgdat); + /* If we already hold the lock, we can skip some rechecking */ - if (!locked) { - locked = compact_lock_irqsave(&pgdat->lru_lock, - &flags, cc); + if (lruvec != locked_lruvec) { + struct mem_cgroup *memcg = lock_page_memcg(page); + + if (locked_lruvec) { + unlock_page_lruvec_irqrestore(locked_lruvec, flags); + locked_lruvec = NULL; + } + /* reget lruvec with a locked memcg */ + lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page)); + compact_lock_irqsave(&lruvec->lru_lock, &flags, cc); + locked_lruvec = lruvec; /* Try get exclusive access under lock */ if (!skip_updated) { @@ -975,7 +993,6 @@ static bool too_many_isolated(pg_data_t *pgdat) } } - lruvec = mem_cgroup_page_lruvec(page, pgdat); /* Try isolate the page */ if (__isolate_lru_page(page, isolate_mode) != 0) @@ -1016,9 +1033,9 @@ static bool too_many_isolated(pg_data_t *pgdat) * page anyway. 
*/ if (nr_isolated) { - if (locked) { - spin_unlock_irqrestore(&pgdat->lru_lock, flags); - locked = false; + if (locked_lruvec) { + unlock_page_lruvec_irqrestore(locked_lruvec, flags); + locked_lruvec = NULL; } putback_movable_pages(&cc->migratepages); cc->nr_migratepages = 0; @@ -1043,8 +1060,8 @@ static bool too_many_isolated(pg_data_t *pgdat) low_pfn = end_pfn; isolate_abort: - if (locked) - spin_unlock_irqrestore(&pgdat->lru_lock, flags); + if (locked_lruvec) + unlock_page_lruvec_irqrestore(locked_lruvec, flags); /* * Updated the cached scanner pfn once the pageblock has been scanned diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 41a0fbddc96b..160c845290cf 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2495,17 +2495,13 @@ static void __split_huge_page_tail(struct page *head, int tail, } static void __split_huge_page(struct page *page, struct list_head *list, - pgoff_t end, unsigned long flags) + struct lruvec *lruvec, pgoff_t end, unsigned long flags) { struct page *head = compound_head(page); - pg_data_t *pgdat = page_pgdat(head); - struct lruvec *lruvec; struct address_space *swap_cache = NULL; unsigned long offset = 0; int i; - lruvec = mem_cgroup_page_lruvec(head, pgdat); - /* complete memcg works before add pages to LRU */ mem_cgroup_split_huge_fixup(head); @@ -2554,7 +2550,7 @@ static void __split_huge_page(struct page *page, struct list_head *list, xa_unlock(&head->mapping->i_pages); } - spin_unlock_irqrestore(&pgdat->lru_lock, flags); + unlock_page_lruvec_irqrestore(lruvec, flags); remap_page(head); @@ -2693,13 +2689,13 @@ bool can_split_huge_page(struct page *page, int *pextra_pins) int split_huge_page_to_list(struct page *page, struct list_head *list) { struct page *head = compound_head(page); - struct pglist_data *pgdata = NODE_DATA(page_to_nid(head)); struct deferred_split *ds_queue = get_deferred_split_queue(page); struct anon_vma *anon_vma = NULL; struct address_space *mapping = NULL; + struct lruvec *lruvec; int count, mapcount, extra_pins, ret; bool mlocked; - unsigned long flags; + unsigned long uninitialized_var(flags); pgoff_t end; VM_BUG_ON_PAGE(is_huge_zero_page(page), page); @@ -2766,7 +2762,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) lru_add_drain(); /* prevent PageLRU to go away from under us, and freeze lru stats */ - spin_lock_irqsave(&pgdata->lru_lock, flags); + lruvec = lock_page_lruvec_irqsave(head, &flags); if (mapping) { XA_STATE(xas, &mapping->i_pages, page_index(head)); @@ -2797,7 +2793,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) } spin_unlock(&ds_queue->split_queue_lock); - __split_huge_page(page, list, end, flags); + __split_huge_page(page, list, lruvec, end, flags); if (PageSwapCache(head)) { swp_entry_t entry = { .val = page_private(head) }; @@ -2816,7 +2812,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) spin_unlock(&ds_queue->split_queue_lock); fail: if (mapping) xa_unlock(&mapping->i_pages); - spin_unlock_irqrestore(&pgdata->lru_lock, flags); + unlock_page_lruvec_irqrestore(lruvec, flags); remap_page(head); ret = -EBUSY; } diff --git a/mm/memcontrol.c b/mm/memcontrol.c index d92538a9185c..00fef8ddbd08 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -1217,7 +1217,7 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd goto out; } - memcg = page->mem_cgroup; + memcg = READ_ONCE(page->mem_cgroup); /* * Swapcache readahead pages are added to the LRU - and * possibly migrated - before they are charged. 
@@ -1238,6 +1238,42 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd return lruvec; } +struct lruvec *lock_page_lruvec_irq(struct page *page) +{ + struct lruvec *lruvec; + struct mem_cgroup *memcg; + + memcg = lock_page_memcg(page); + lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page)); + spin_lock_irq(&lruvec->lru_lock); + + return lruvec; +} + +struct lruvec *lock_page_lruvec_irqsave(struct page *page, unsigned long *flags) +{ + struct lruvec *lruvec; + struct mem_cgroup *memcg; + + memcg = lock_page_memcg(page); + lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page)); + spin_lock_irqsave(&lruvec->lru_lock, *flags); + + return lruvec; +} + +void unlock_page_lruvec_irq(struct lruvec *lruvec) +{ + spin_unlock_irq(&lruvec->lru_lock); + __unlock_page_memcg(lruvec_memcg(lruvec)); +} + +void unlock_page_lruvec_irqrestore(struct lruvec *lruvec, unsigned long flags) +{ + spin_unlock_irqrestore(&lruvec->lru_lock, flags); + __unlock_page_memcg(lruvec_memcg(lruvec)); +} + /** * mem_cgroup_update_lru_size - account for adding or removing an lru page * @lruvec: mem_cgroup per zone lru vector @@ -2574,7 +2610,6 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg, bool lrucare) { struct lruvec *lruvec = NULL; - pg_data_t *pgdat; VM_BUG_ON_PAGE(page->mem_cgroup, page); @@ -2583,16 +2618,16 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg, * may already be on some other mem_cgroup's LRU. Take care of it. */ if (lrucare) { - pgdat = page_pgdat(page); - spin_lock_irq(&pgdat->lru_lock); - - if (PageLRU(page)) { - lruvec = mem_cgroup_page_lruvec(page, pgdat); + lruvec = lock_page_lruvec_irq(page); + if (likely(PageLRU(page))) { ClearPageLRU(page); del_page_from_lru_list(page, lruvec, page_lru(page)); - } else - spin_unlock_irq(&pgdat->lru_lock); + } else { + unlock_page_lruvec_irq(lruvec); + lruvec = NULL; + } } + /* * Nobody should be changing or seriously looking at * page->mem_cgroup at this point: @@ -2610,11 +2645,13 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg, page->mem_cgroup = memcg; if (lrucare && lruvec) { - lruvec = mem_cgroup_page_lruvec(page, pgdat); + unlock_page_lruvec_irq(lruvec); + lruvec = lock_page_lruvec_irq(page); + VM_BUG_ON_PAGE(PageLRU(page), page); SetPageLRU(page); add_page_to_lru_list(page, lruvec, page_lru(page)); - spin_unlock_irq(&pgdat->lru_lock); + unlock_page_lruvec_irq(lruvec); } } @@ -2911,7 +2948,7 @@ void __memcg_kmem_uncharge(struct page *page, int order) /* * Because tail pages are not marked as "used", set it. We're under - * pgdat->lru_lock and migration entries setup in all page mappings. + * lruvec->lru_lock and migration entries setup in all page mappings. */ void mem_cgroup_split_huge_fixup(struct page *head) { diff --git a/mm/mlock.c b/mm/mlock.c index a72c1eeded77..10d15f58b061 100644 --- a/mm/mlock.c +++ b/mm/mlock.c @@ -106,12 +106,10 @@ void mlock_vma_page(struct page *page) * Isolate a page from LRU with optional get_page() pin. * Assumes lru_lock already held and page already pinned. 
*/ -static bool __munlock_isolate_lru_page(struct page *page, bool getpage) +static bool __munlock_isolate_lru_page(struct page *page, + struct lruvec *lruvec, bool getpage) { if (PageLRU(page)) { - struct lruvec *lruvec; - - lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page)); if (getpage) get_page(page); ClearPageLRU(page); @@ -182,7 +180,7 @@ static void __munlock_isolation_failed(struct page *page) unsigned int munlock_vma_page(struct page *page) { int nr_pages; - pg_data_t *pgdat = page_pgdat(page); + struct lruvec *lruvec; /* For try_to_munlock() and to serialize with page migration */ BUG_ON(!PageLocked(page)); @@ -194,7 +192,7 @@ unsigned int munlock_vma_page(struct page *page) * might otherwise copy PageMlocked to part of the tail pages before * we clear it in the head page. It also stabilizes hpage_nr_pages(). */ - spin_lock_irq(&pgdat->lru_lock); + lruvec = lock_page_lruvec_irq(page); if (!TestClearPageMlocked(page)) { /* Potentially, PTE-mapped THP: do not skip the rest PTEs */ @@ -205,15 +203,15 @@ unsigned int munlock_vma_page(struct page *page) nr_pages = hpage_nr_pages(page); __mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages); - if (__munlock_isolate_lru_page(page, true)) { - spin_unlock_irq(&pgdat->lru_lock); + if (__munlock_isolate_lru_page(page, lruvec, true)) { + unlock_page_lruvec_irq(lruvec); __munlock_isolated_page(page); goto out; } __munlock_isolation_failed(page); unlock_out: - spin_unlock_irq(&pgdat->lru_lock); + unlock_page_lruvec_irq(lruvec); out: return nr_pages - 1; @@ -291,28 +289,29 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone) { int i; int nr = pagevec_count(pvec); - int delta_munlocked = -nr; struct pagevec pvec_putback; + struct lruvec *lruvec = NULL; int pgrescued = 0; pagevec_init(&pvec_putback); /* Phase 1: page isolation */ - spin_lock_irq(&zone->zone_pgdat->lru_lock); for (i = 0; i < nr; i++) { struct page *page = pvec->pages[i]; + lruvec = lock_page_lruvec_irq(page); + if (TestClearPageMlocked(page)) { /* * We already have pin from follow_page_mask() * so we can spare the get_page() here. 
*/ - if (__munlock_isolate_lru_page(page, false)) + if (__munlock_isolate_lru_page(page, lruvec, false)) { + __mod_zone_page_state(zone, NR_MLOCK, -1); + unlock_page_lruvec_irq(lruvec); continue; - else + } else __munlock_isolation_failed(page); - } else { - delta_munlocked++; } /* @@ -323,9 +322,8 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone) */ pagevec_add(&pvec_putback, pvec->pages[i]); pvec->pages[i] = NULL; + unlock_page_lruvec_irq(lruvec); } - __mod_zone_page_state(zone, NR_MLOCK, delta_munlocked); - spin_unlock_irq(&zone->zone_pgdat->lru_lock); /* Now we can release pins of pages that we are not munlocking */ pagevec_release(&pvec_putback); diff --git a/mm/mmzone.c b/mm/mmzone.c index 4686fdc23bb9..3750a90ed4a0 100644 --- a/mm/mmzone.c +++ b/mm/mmzone.c @@ -91,6 +91,7 @@ void lruvec_init(struct lruvec *lruvec) enum lru_list lru; memset(lruvec, 0, sizeof(struct lruvec)); + spin_lock_init(&lruvec->lru_lock); for_each_lru(lru) INIT_LIST_HEAD(&lruvec->lists[lru]); diff --git a/mm/page_idle.c b/mm/page_idle.c index 295512465065..d2d868ca2bf7 100644 --- a/mm/page_idle.c +++ b/mm/page_idle.c @@ -31,7 +31,7 @@ static struct page *page_idle_get_page(unsigned long pfn) { struct page *page; - pg_data_t *pgdat; + struct lruvec *lruvec; if (!pfn_valid(pfn)) return NULL; @@ -41,13 +41,12 @@ static struct page *page_idle_get_page(unsigned long pfn) !get_page_unless_zero(page)) return NULL; - pgdat = page_pgdat(page); - spin_lock_irq(&pgdat->lru_lock); + lruvec = lock_page_lruvec_irq(page); if (unlikely(!PageLRU(page))) { put_page(page); page = NULL; } - spin_unlock_irq(&pgdat->lru_lock); + unlock_page_lruvec_irq(lruvec); return page; } diff --git a/mm/swap.c b/mm/swap.c index 5341ae93861f..97e108be4f92 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -60,16 +60,14 @@ static void __page_cache_release(struct page *page) { if (PageLRU(page)) { - pg_data_t *pgdat = page_pgdat(page); struct lruvec *lruvec; - unsigned long flags; + unsigned long flags = 0; - spin_lock_irqsave(&pgdat->lru_lock, flags); - lruvec = mem_cgroup_page_lruvec(page, pgdat); + lruvec = lock_page_lruvec_irqsave(page, &flags); VM_BUG_ON_PAGE(!PageLRU(page), page); __ClearPageLRU(page); del_page_from_lru_list(page, lruvec, page_off_lru(page)); - spin_unlock_irqrestore(&pgdat->lru_lock, flags); + unlock_page_lruvec_irqrestore(lruvec, flags); } __ClearPageWaiters(page); } @@ -192,26 +190,18 @@ static void pagevec_lru_move_fn(struct pagevec *pvec, void *arg) { int i; - struct pglist_data *pgdat = NULL; - struct lruvec *lruvec; + struct lruvec *lruvec = NULL; unsigned long flags = 0; for (i = 0; i < pagevec_count(pvec); i++) { struct page *page = pvec->pages[i]; - struct pglist_data *pagepgdat = page_pgdat(page); - if (pagepgdat != pgdat) { - if (pgdat) - spin_unlock_irqrestore(&pgdat->lru_lock, flags); - pgdat = pagepgdat; - spin_lock_irqsave(&pgdat->lru_lock, flags); - } + lruvec = lock_page_lruvec_irqsave(page, &flags); - lruvec = mem_cgroup_page_lruvec(page, pgdat); (*move_fn)(page, lruvec, arg); + unlock_page_lruvec_irqrestore(lruvec, flags); } - if (pgdat) - spin_unlock_irqrestore(&pgdat->lru_lock, flags); + release_pages(pvec->pages, pvec->nr); pagevec_reinit(pvec); } @@ -324,12 +314,12 @@ static inline void activate_page_drain(int cpu) void activate_page(struct page *page) { - pg_data_t *pgdat = page_pgdat(page); + struct lruvec *lruvec; page = compound_head(page); - spin_lock_irq(&pgdat->lru_lock); - __activate_page(page, mem_cgroup_page_lruvec(page, pgdat), NULL); - spin_unlock_irq(&pgdat->lru_lock); + 
lruvec = lock_page_lruvec_irq(page); + __activate_page(page, lruvec, NULL); + unlock_page_lruvec_irq(lruvec); } #endif @@ -780,8 +770,7 @@ void release_pages(struct page **pages, int nr) { int i; LIST_HEAD(pages_to_free); - struct pglist_data *locked_pgdat = NULL; - struct lruvec *lruvec; + struct lruvec *lruvec = NULL; unsigned long uninitialized_var(flags); unsigned int uninitialized_var(lock_batch); @@ -791,21 +780,20 @@ void release_pages(struct page **pages, int nr) /* * Make sure the IRQ-safe lock-holding time does not get * excessive with a continuous string of pages from the - * same pgdat. The lock is held only if pgdat != NULL. + * same lruvec. The lock is held only if lruvec != NULL. */ - if (locked_pgdat && ++lock_batch == SWAP_CLUSTER_MAX) { - spin_unlock_irqrestore(&locked_pgdat->lru_lock, flags); - locked_pgdat = NULL; + if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) { + unlock_page_lruvec_irqrestore(lruvec, flags); + lruvec = NULL; } if (is_huge_zero_page(page)) continue; if (is_zone_device_page(page)) { - if (locked_pgdat) { - spin_unlock_irqrestore(&locked_pgdat->lru_lock, - flags); - locked_pgdat = NULL; + if (lruvec) { + unlock_page_lruvec_irqrestore(lruvec, flags); + lruvec = NULL; } /* * ZONE_DEVICE pages that return 'false' from @@ -822,27 +810,24 @@ void release_pages(struct page **pages, int nr) continue; if (PageCompound(page)) { - if (locked_pgdat) { - spin_unlock_irqrestore(&locked_pgdat->lru_lock, flags); - locked_pgdat = NULL; + if (lruvec) { + unlock_page_lruvec_irqrestore(lruvec, flags); + lruvec = NULL; } __put_compound_page(page); continue; } if (PageLRU(page)) { - struct pglist_data *pgdat = page_pgdat(page); + struct lruvec *new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page)); - if (pgdat != locked_pgdat) { - if (locked_pgdat) - spin_unlock_irqrestore(&locked_pgdat->lru_lock, - flags); + if (new_lruvec != lruvec) { + if (lruvec) + unlock_page_lruvec_irqrestore(lruvec, flags); lock_batch = 0; - locked_pgdat = pgdat; - spin_lock_irqsave(&locked_pgdat->lru_lock, flags); + lruvec = lock_page_lruvec_irqsave(page, &flags); } - lruvec = mem_cgroup_page_lruvec(page, locked_pgdat); VM_BUG_ON_PAGE(!PageLRU(page), page); __ClearPageLRU(page); del_page_from_lru_list(page, lruvec, page_off_lru(page)); @@ -854,8 +839,8 @@ void release_pages(struct page **pages, int nr) list_add(&page->lru, &pages_to_free); } - if (locked_pgdat) - spin_unlock_irqrestore(&locked_pgdat->lru_lock, flags); + if (lruvec) + unlock_page_lruvec_irqrestore(lruvec, flags); mem_cgroup_uncharge_list(&pages_to_free); free_unref_page_list(&pages_to_free); @@ -893,7 +878,7 @@ void lru_add_page_tail(struct page *page, struct page *page_tail, VM_BUG_ON_PAGE(!PageHead(page), page); VM_BUG_ON_PAGE(PageCompound(page_tail), page); VM_BUG_ON_PAGE(PageLRU(page_tail), page); - lockdep_assert_held(&lruvec_pgdat(lruvec)->lru_lock); + lockdep_assert_held(&lruvec->lru_lock); if (!list) SetPageLRU(page_tail); diff --git a/mm/vmscan.c b/mm/vmscan.c index a270d32bdb94..7e1cb41da1fb 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1766,11 +1766,9 @@ int isolate_lru_page(struct page *page) WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"); if (PageLRU(page)) { - pg_data_t *pgdat = page_pgdat(page); struct lruvec *lruvec; - spin_lock_irq(&pgdat->lru_lock); - lruvec = mem_cgroup_page_lruvec(page, pgdat); + lruvec = lock_page_lruvec_irq(page); if (PageLRU(page)) { int lru = page_lru(page); get_page(page); @@ -1778,7 +1776,7 @@ int isolate_lru_page(struct page *page) del_page_from_lru_list(page, lruvec, 
lru); ret = 0; } - spin_unlock_irq(&pgdat->lru_lock); + unlock_page_lruvec_irq(lruvec); } return ret; } @@ -1843,20 +1841,23 @@ static int too_many_isolated(struct pglist_data *pgdat, int file, static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec, struct list_head *list) { - struct pglist_data *pgdat = lruvec_pgdat(lruvec); int nr_pages, nr_moved = 0; LIST_HEAD(pages_to_free); struct page *page; enum lru_list lru; while (!list_empty(list)) { + struct mem_cgroup *memcg; + struct lruvec *plv; + bool relocked = false; + page = lru_to_page(list); VM_BUG_ON_PAGE(PageLRU(page), page); list_del(&page->lru); if (unlikely(!page_evictable(page))) { - spin_unlock_irq(&pgdat->lru_lock); + spin_unlock_irq(&lruvec->lru_lock); putback_lru_page(page); - spin_lock_irq(&pgdat->lru_lock); + spin_lock_irq(&lruvec->lru_lock); continue; } @@ -1877,21 +1878,34 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec, __ClearPageActive(page); if (unlikely(PageCompound(page))) { - spin_unlock_irq(&pgdat->lru_lock); + spin_unlock_irq(&lruvec->lru_lock); (*get_compound_page_dtor(page))(page); - spin_lock_irq(&pgdat->lru_lock); + spin_lock_irq(&lruvec->lru_lock); } else list_add(&page->lru, &pages_to_free); continue; } - lruvec = mem_cgroup_page_lruvec(page, pgdat); + memcg = lock_page_memcg(page); + plv = mem_cgroup_lruvec(memcg, page_pgdat(page)); + /* page's lruvec changed in memcg moving */ + if (plv != lruvec) { + spin_unlock_irq(&lruvec->lru_lock); + spin_lock_irq(&plv->lru_lock); + relocked = true; + } + lru = page_lru(page); nr_pages = hpage_nr_pages(page); - - update_lru_size(lruvec, lru, page_zonenum(page), nr_pages); - list_add(&page->lru, &lruvec->lists[lru]); + update_lru_size(plv, lru, page_zonenum(page), nr_pages); + list_add(&page->lru, &plv->lists[lru]); nr_moved += nr_pages; + + if (relocked) { + spin_unlock_irq(&plv->lru_lock); + spin_lock_irq(&lruvec->lru_lock); + } + __unlock_page_memcg(memcg); } /* @@ -1949,7 +1963,7 @@ static int current_may_throttle(void) lru_add_drain(); - spin_lock_irq(&pgdat->lru_lock); + spin_lock_irq(&lruvec->lru_lock); nr_taken = isolate_lru_pages(nr_to_scan, lruvec, &page_list, &nr_scanned, sc, lru); @@ -1961,15 +1975,14 @@ static int current_may_throttle(void) if (!cgroup_reclaim(sc)) __count_vm_events(item, nr_scanned); __count_memcg_events(lruvec_memcg(lruvec), item, nr_scanned); - spin_unlock_irq(&pgdat->lru_lock); - + spin_unlock_irq(&lruvec->lru_lock); if (nr_taken == 0) return 0; nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, 0, &stat, false); - spin_lock_irq(&pgdat->lru_lock); + spin_lock_irq(&lruvec->lru_lock); item = current_is_kswapd() ? 
PGSTEAL_KSWAPD : PGSTEAL_DIRECT; if (!cgroup_reclaim(sc)) @@ -1982,7 +1995,7 @@ static int current_may_throttle(void) __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken); - spin_unlock_irq(&pgdat->lru_lock); + spin_unlock_irq(&lruvec->lru_lock); mem_cgroup_uncharge_list(&page_list); free_unref_page_list(&page_list); @@ -2035,7 +2048,7 @@ static void shrink_active_list(unsigned long nr_to_scan, lru_add_drain(); - spin_lock_irq(&pgdat->lru_lock); + spin_lock_irq(&lruvec->lru_lock); nr_taken = isolate_lru_pages(nr_to_scan, lruvec, &l_hold, &nr_scanned, sc, lru); @@ -2046,7 +2059,7 @@ static void shrink_active_list(unsigned long nr_to_scan, __count_vm_events(PGREFILL, nr_scanned); __count_memcg_events(lruvec_memcg(lruvec), PGREFILL, nr_scanned); - spin_unlock_irq(&pgdat->lru_lock); + spin_unlock_irq(&lruvec->lru_lock); while (!list_empty(&l_hold)) { cond_resched(); @@ -2092,7 +2105,7 @@ static void shrink_active_list(unsigned long nr_to_scan, /* * Move pages back to the lru list. */ - spin_lock_irq(&pgdat->lru_lock); + spin_lock_irq(&lruvec->lru_lock); /* * Count referenced pages from currently used mappings as rotated, * even though only some of them are actually re-activated. This @@ -2110,7 +2123,7 @@ static void shrink_active_list(unsigned long nr_to_scan, __count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate); __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken); - spin_unlock_irq(&pgdat->lru_lock); + spin_unlock_irq(&lruvec->lru_lock); mem_cgroup_uncharge_list(&l_active); free_unref_page_list(&l_active); @@ -2259,7 +2272,6 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc, struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat; u64 fraction[2]; u64 denominator = 0; /* gcc */ - struct pglist_data *pgdat = lruvec_pgdat(lruvec); unsigned long anon_prio, file_prio; enum scan_balance scan_balance; unsigned long anon, file; @@ -2337,7 +2349,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc, file = lruvec_lru_size(lruvec, LRU_ACTIVE_FILE, MAX_NR_ZONES) + lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, MAX_NR_ZONES); - spin_lock_irq(&pgdat->lru_lock); + spin_lock_irq(&lruvec->lru_lock); if (unlikely(reclaim_stat->recent_scanned[0] > anon / 4)) { reclaim_stat->recent_scanned[0] /= 2; reclaim_stat->recent_rotated[0] /= 2; @@ -2358,7 +2370,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc, fp = file_prio * (reclaim_stat->recent_scanned[1] + 1); fp /= reclaim_stat->recent_rotated[1] + 1; - spin_unlock_irq(&pgdat->lru_lock); + spin_unlock_irq(&lruvec->lru_lock); fraction[0] = ap; fraction[1] = fp; @@ -4336,24 +4348,21 @@ int page_evictable(struct page *page) */ void check_move_unevictable_pages(struct pagevec *pvec) { - struct lruvec *lruvec; - struct pglist_data *pgdat = NULL; + struct lruvec *lruvec = NULL; int pgscanned = 0; int pgrescued = 0; int i; for (i = 0; i < pvec->nr; i++) { struct page *page = pvec->pages[i]; - struct pglist_data *pagepgdat = page_pgdat(page); + struct lruvec *new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page)); pgscanned++; - if (pagepgdat != pgdat) { - if (pgdat) - spin_unlock_irq(&pgdat->lru_lock); - pgdat = pagepgdat; - spin_lock_irq(&pgdat->lru_lock); + if (lruvec != new_lruvec) { + if (lruvec) + unlock_page_lruvec_irq(lruvec); + lruvec = lock_page_lruvec_irq(page); } - lruvec = mem_cgroup_page_lruvec(page, pgdat); if (!PageLRU(page) || !PageUnevictable(page)) continue; @@ -4369,10 +4378,10 @@ void 
check_move_unevictable_pages(struct pagevec *pvec)
 		}
 	}
 
-	if (pgdat) {
+	if (lruvec) {
 		__count_vm_events(UNEVICTABLE_PGRESCUED, pgrescued);
 		__count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
-		spin_unlock_irq(&pgdat->lru_lock);
+		unlock_page_lruvec_irq(lruvec);
 	}
 }
 EXPORT_SYMBOL_GPL(check_move_unevictable_pages);
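
Taken together, the conversions in this patch follow one pattern: look up the
page's lruvec and take its lru_lock instead of the node's lock. A condensed
before/after sketch, modeled on the __page_cache_release() hunk above (an
illustration, not the literal source):

	/* Before: take the per-node lock, then look up the lruvec under it. */
	spin_lock_irqsave(&pgdat->lru_lock, flags);
	lruvec = mem_cgroup_page_lruvec(page, pgdat);
	__ClearPageLRU(page);
	del_page_from_lru_list(page, lruvec, page_off_lru(page));
	spin_unlock_irqrestore(&pgdat->lru_lock, flags);

	/*
	 * After: the lock lives in the lruvec. lock_page_lruvec_irqsave()
	 * locks the page's memcg and then that memcg's lruvec->lru_lock.
	 */
	lruvec = lock_page_lruvec_irqsave(page, &flags);
	__ClearPageLRU(page);
	del_page_from_lru_list(page, lruvec, page_off_lru(page));
	unlock_page_lruvec_irqrestore(lruvec, flags);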
From patchwork Thu Jan 16 03:05:03 2020
X-Patchwork-Submitter: Alex Shi
X-Patchwork-Id: 11335927
From: Alex Shi
To: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
    hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
    yang.shi@linux.alibaba.com, willy@infradead.org, shakeelb@google.com,
    hannes@cmpxchg.org
Cc: Michal Hocko, Vladimir Davydov, Roman Gushchin, Chris Down,
    Thomas Gleixner, Vlastimil Babka, Andrey Ryabinin, swkhack,
    "Potyra, Stefan", Jason Gunthorpe, Mauro Carvalho Chehab, Peng Fan,
    Nikolay Borisov, Ira Weiny, Kirill Tkhai, Yafang Shao
Subject: [PATCH v8 04/10] mm/lru: introduce the relock_page_lruvec function
Date: Thu, 16 Jan 2020 11:05:03 +0800
Message-Id: <1579143909-156105-5-git-send-email-alex.shi@linux.alibaba.com>
In-Reply-To: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com>

During lruvec locking, a new page's lruvec may be the same as the previous
page's lruvec. In that case we can skip the re-locking and only switch locks
when the lruvec actually changes. The function is named relock_page_lruvec,
following Hugh Dickins' patch.

Signed-off-by: Alex Shi
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Vladimir Davydov
Cc: Andrew Morton
Cc: Roman Gushchin
Cc: Shakeel Butt
Cc: Chris Down
Cc: Thomas Gleixner
Cc: Vlastimil Babka
Cc: Andrey Ryabinin
Cc: swkhack
Cc: "Potyra, Stefan"
Cc: Jason Gunthorpe
Cc: Matthew Wilcox
Cc: Mauro Carvalho Chehab
Cc: Peng Fan
Cc: Nikolay Borisov
Cc: Ira Weiny
Cc: Kirill Tkhai
Cc: Yang Shi
Cc: Yafang Shao
Cc: Mel Gorman
Cc: Konstantin Khlebnikov
Cc: Hugh Dickins
Cc: Tejun Heo
Cc: linux-kernel@vger.kernel.org
Cc: cgroups@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/memcontrol.h | 36 ++++++++++++++++++++++++++++++++++++
 mm/vmscan.c                |  8 ++------
 2 files changed, 38 insertions(+), 6 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 8389b9b927ef..09e861df48e8 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1299,6 +1299,42 @@ static inline void dec_lruvec_page_state(struct page *page,
 	mod_lruvec_page_state(page, idx, -1);
 }
 
+/* Don't lock again if the page's lruvec is the one already locked */
+static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
+		struct lruvec *locked_lruvec)
+{
+	struct pglist_data *pgdat = page_pgdat(page);
+	struct lruvec *lruvec;
+
+	lruvec = mem_cgroup_page_lruvec(page, pgdat);
+
+	if (likely(locked_lruvec == lruvec))
+		return lruvec;
+
+	if (unlikely(locked_lruvec))
+		unlock_page_lruvec_irq(locked_lruvec);
+
+	return lock_page_lruvec_irq(page);
+}
+
+/* Don't lock again if the page's lruvec is the one already locked */
+static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
+		struct lruvec *locked_lruvec, unsigned long *flags)
+{
+	struct pglist_data *pgdat = page_pgdat(page);
+	struct lruvec *lruvec;
+
+	lruvec = mem_cgroup_page_lruvec(page, pgdat);
+
+	if (likely(locked_lruvec == lruvec))
+		return lruvec;
+
+	if (unlikely(locked_lruvec))
+		unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
+
+	return lock_page_lruvec_irqsave(page, flags);
+}
+
 #ifdef CONFIG_CGROUP_WRITEBACK
 
 struct wb_domain *mem_cgroup_wb_domain(struct bdi_writeback *wb);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7e1cb41da1fb..ee20a64a7ccc 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4355,14 +4355,10 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 
 	for (i = 0; i < pvec->nr; i++) {
 		struct page *page = pvec->pages[i];
-		struct lruvec *new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
 
 		pgscanned++;
-		if (lruvec != new_lruvec) {
-			if (lruvec)
-				unlock_page_lruvec_irq(lruvec);
-			lruvec = lock_page_lruvec_irq(page);
-		}
+
+		lruvec = relock_page_lruvec_irq(page, lruvec);
 
 		if (!PageLRU(page) || !PageUnevictable(page))
 			continue;
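
The same relocking idea is applied to the munlock pagevec path in the next
patch. A typical pagevec walk with relock_page_lruvec_irq() has roughly this
shape (a sketch based on the check_move_unevictable_pages() and
__munlock_pagevec() hunks; the per-page work is elided):

	struct lruvec *lruvec = NULL;
	int i;

	for (i = 0; i < pagevec_count(pvec); i++) {
		struct page *page = pvec->pages[i];

		/*
		 * Keeps the current lock when consecutive pages share a
		 * lruvec; otherwise drops it and takes the new page's lock.
		 */
		lruvec = relock_page_lruvec_irq(page, lruvec);

		/* ... per-page work under lruvec->lru_lock ... */
	}
	if (lruvec)
		unlock_page_lruvec_irq(lruvec);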
(UTC) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R101e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=e01e07488;MF=alex.shi@linux.alibaba.com;NM=1;PH=DS;RN=13;SR=0;TI=SMTPD_---0TnrFONZ_1579143913; Received: from localhost(mailfrom:alex.shi@linux.alibaba.com fp:SMTPD_---0TnrFONZ_1579143913) by smtp.aliyun-inc.com(127.0.0.1); Thu, 16 Jan 2020 11:05:13 +0800 From: Alex Shi To: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org, hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com, yang.shi@linux.alibaba.com, willy@infradead.org, shakeelb@google.com, hannes@cmpxchg.org Subject: [PATCH v8 05/10] mm/mlock: optimize munlock_pagevec by relocking Date: Thu, 16 Jan 2020 11:05:04 +0800 Message-Id: <1579143909-156105-6-git-send-email-alex.shi@linux.alibaba.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com> References: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: During pagevec processing, a page's lruvec may be the same as the previous page's, so we can save a relock and only change the lock when the lruvec is different. Signed-off-by: Alex Shi Cc: Johannes Weiner Cc: Hugh Dickins Cc: linux-kernel@vger.kernel.org Cc: cgroups@vger.kernel.org Cc: linux-mm@kvack.org Cc: Andrew Morton --- mm/mlock.c | 16 +++++++++------- 1 file changed, 9 insertions(+), 7 deletions(-) diff --git a/mm/mlock.c b/mm/mlock.c index 10d15f58b061..050f999eadb1 100644 --- a/mm/mlock.c +++ b/mm/mlock.c @@ -289,6 +289,7 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone) { int i; int nr = pagevec_count(pvec); + int delta_munlocked = -nr; struct pagevec pvec_putback; struct lruvec *lruvec = NULL; int pgrescued = 0; @@ -299,20 +300,19 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone) for (i = 0; i < nr; i++) { struct page *page = pvec->pages[i]; - lruvec = lock_page_lruvec_irq(page); + lruvec = relock_page_lruvec_irq(page, lruvec); if (TestClearPageMlocked(page)) { /* * We already have pin from follow_page_mask() * so we can spare the get_page() here.
*/ - if (__munlock_isolate_lru_page(page, lruvec, false)) { - __mod_zone_page_state(zone, NR_MLOCK, -1); - unlock_page_lruvec_irq(lruvec); + if (__munlock_isolate_lru_page(page, lruvec, false)) continue; - } else + else __munlock_isolation_failed(page); - } + } else + delta_munlocked++; /* * We won't be munlocking this page in the next phase @@ -322,8 +322,10 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone) */ pagevec_add(&pvec_putback, pvec->pages[i]); pvec->pages[i] = NULL; - unlock_page_lruvec_irq(lruvec); } + __mod_zone_page_state(zone, NR_MLOCK, delta_munlocked); + if (lruvec) + unlock_page_lruvec_irq(lruvec); /* Now we can release pins of pages that we are not munlocking */ pagevec_release(&pvec_putback); From patchwork Thu Jan 16 03:05:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Shi X-Patchwork-Id: 11335933 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 49F6D92A for ; Thu, 16 Jan 2020 03:05:43 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 17A4D24679 for ; Thu, 16 Jan 2020 03:05:43 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 17A4D24679 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 863AD8E0029; Wed, 15 Jan 2020 22:05:28 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 7A1388E0026; Wed, 15 Jan 2020 22:05:28 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6413C8E0029; Wed, 15 Jan 2020 22:05:28 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0185.hostedemail.com [216.40.44.185]) by kanga.kvack.org (Postfix) with ESMTP id 457BC8E0026 for ; Wed, 15 Jan 2020 22:05:28 -0500 (EST) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with SMTP id 1AAE1441C for ; Thu, 16 Jan 2020 03:05:28 +0000 (UTC) X-FDA: 76382006736.22.title30_86f458d479d06 X-Spam-Summary: 1,0,0,,d41d8cd98f00b204,alex.shi@linux.alibaba.com,:cgroups@vger.kernel.org:linux-kernel@vger.kernel.org::akpm@linux-foundation.org:mgorman@techsingularity.net:tj@kernel.org:hughd@google.com:khlebnikov@yandex-team.ru:daniel.m.jordan@oracle.com:yang.shi@linux.alibaba.com:willy@infradead.org:shakeelb@google.com:hannes@cmpxchg.org:tglx@linutronix.de:mchehab+samsung@kernel.org:laoar.shao@gmail.com,RULES_HIT:30054:30070,0,RBL:47.88.44.36:@linux.alibaba.com:.lbl8.mailshell.net-62.18.0.100 64.10.201.10;47.88.44.36-irl.urbl.hostedemail.com-127.0.0.175,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:25,LUA_SUMMARY:none X-HE-Tag: title30_86f458d479d06 X-Filterd-Recvd-Size: 3464 Received: from out4436.biz.mail.alibaba.com (out4436.biz.mail.alibaba.com [47.88.44.36]) by imf27.hostedemail.com (Postfix) with ESMTP for ; Thu, 16 Jan 2020 03:05:27 +0000 (UTC) X-Alimail-AntiSpam: 
AC=PASS;BC=-1|-1;BR=01201311R991e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=e01e04407;MF=alex.shi@linux.alibaba.com;NM=1;PH=DS;RN=16;SR=0;TI=SMTPD_---0TnrFONq_1579143914; Received: from localhost(mailfrom:alex.shi@linux.alibaba.com fp:SMTPD_---0TnrFONq_1579143914) by smtp.aliyun-inc.com(127.0.0.1); Thu, 16 Jan 2020 11:05:14 +0800 From: Alex Shi To: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org, hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com, yang.shi@linux.alibaba.com, willy@infradead.org, shakeelb@google.com, hannes@cmpxchg.org Cc: Thomas Gleixner , Mauro Carvalho Chehab , Yafang Shao Subject: [PATCH v8 06/10] mm/swap: only change the lru_lock iff page's lruvec is different Date: Thu, 16 Jan 2020 11:05:05 +0800 Message-Id: <1579143909-156105-7-git-send-email-alex.shi@linux.alibaba.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com> References: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Since we introduced relock_page_lruvec, we can use it in more places to reduce repeated spin_lock/unlock cycles. Signed-off-by: Alex Shi Cc: Johannes Weiner Cc: Andrew Morton Cc: Thomas Gleixner Cc: Matthew Wilcox Cc: Mauro Carvalho Chehab Cc: Yafang Shao Cc: Mel Gorman Cc: Konstantin Khlebnikov Cc: Hugh Dickins Cc: linux-kernel@vger.kernel.org Cc: cgroups@vger.kernel.org Cc: linux-mm@kvack.org --- mm/swap.c | 14 ++++++-------- 1 file changed, 6 insertions(+), 8 deletions(-) diff --git a/mm/swap.c b/mm/swap.c index 97e108be4f92..84a845968e1d 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -196,11 +196,12 @@ static void pagevec_lru_move_fn(struct pagevec *pvec, for (i = 0; i < pagevec_count(pvec); i++) { struct page *page = pvec->pages[i]; - lruvec = lock_page_lruvec_irqsave(page, &flags); + lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags); (*move_fn)(page, lruvec, arg); - unlock_page_lruvec_irqrestore(lruvec, flags); } + if (lruvec) + unlock_page_lruvec_irqrestore(lruvec, flags); release_pages(pvec->pages, pvec->nr); pagevec_reinit(pvec); @@ -819,14 +820,11 @@ void release_pages(struct page **pages, int nr) } if (PageLRU(page)) { - struct lruvec *new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page)); + struct lruvec *pre_lruvec = lruvec; - if (new_lruvec != lruvec) { - if (lruvec) - unlock_page_lruvec_irqrestore(lruvec, flags); + lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags); + if (pre_lruvec != lruvec) lock_batch = 0; - lruvec = lock_page_lruvec_irqsave(page, &flags); - } VM_BUG_ON_PAGE(!PageLRU(page), page); __ClearPageLRU(page); From patchwork Thu Jan 16 03:05:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Shi X-Patchwork-Id: 11335917 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6321A92A for ; Thu, 16 Jan 2020 03:05:23 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 3AEA8214AF for ; Thu, 16 Jan 2020 03:05:23 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3AEA8214AF Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none)
header.from=linux.alibaba.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id B72358E0021; Wed, 15 Jan 2020 22:05:20 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id AFAE98E001C; Wed, 15 Jan 2020 22:05:20 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9506C8E0021; Wed, 15 Jan 2020 22:05:20 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0178.hostedemail.com [216.40.44.178]) by kanga.kvack.org (Postfix) with ESMTP id 6EDF48E001C for ; Wed, 15 Jan 2020 22:05:20 -0500 (EST) Received: from smtpin14.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with SMTP id 277CC3A91 for ; Thu, 16 Jan 2020 03:05:20 +0000 (UTC) X-FDA: 76382006400.14.noise85_85c23171d4f42 X-Spam-Summary: 2,0,0,84d68cf58583398b,d41d8cd98f00b204,alex.shi@linux.alibaba.com,:cgroups@vger.kernel.org:linux-kernel@vger.kernel.org::akpm@linux-foundation.org:mgorman@techsingularity.net:tj@kernel.org:hughd@google.com:khlebnikov@yandex-team.ru:daniel.m.jordan@oracle.com:yang.shi@linux.alibaba.com:willy@infradead.org:shakeelb@google.com:hannes@cmpxchg.org:vbabka@suse.cz:dan.j.williams@intel.com:mhocko@suse.com:richard.weiyang@gmail.com:arunks@codeaurora.org:osalvador@suse.de:rppt@linux.vnet.ibm.com:alexander.h.duyck@linux.intel.com:pasha.tatashin@soleen.com:glider@google.com,RULES_HIT:41:355:379:541:800:960:973:988:989:1260:1261:1345:1359:1431:1437:1534:1541:1711:1730:1747:1777:1792:2198:2199:2393:2559:2562:3138:3139:3140:3141:3142:3352:3872:3876:4321:4605:5007:6261:6737:7514:7903:9207:10004:11026:11473:11658:11914:12043:12048:12296:12297:12438:12555:12895:13069:13311:13357:13846:14096:14181:14384:14394:14721:14915:21060:21080:21451:21627:30064,0,RBL:115.124.30.133:@linux .alibaba X-HE-Tag: noise85_85c23171d4f42 X-Filterd-Recvd-Size: 3549 Received: from out30-133.freemail.mail.aliyun.com (out30-133.freemail.mail.aliyun.com [115.124.30.133]) by imf07.hostedemail.com (Postfix) with ESMTP for ; Thu, 16 Jan 2020 03:05:18 +0000 (UTC) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R131e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=e01e07417;MF=alex.shi@linux.alibaba.com;NM=1;PH=DS;RN=23;SR=0;TI=SMTPD_---0TnrFiNF_1579143914; Received: from localhost(mailfrom:alex.shi@linux.alibaba.com fp:SMTPD_---0TnrFiNF_1579143914) by smtp.aliyun-inc.com(127.0.0.1); Thu, 16 Jan 2020 11:05:14 +0800 From: Alex Shi To: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org, hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com, yang.shi@linux.alibaba.com, willy@infradead.org, shakeelb@google.com, hannes@cmpxchg.org Cc: Vlastimil Babka , Dan Williams , Michal Hocko , Wei Yang , Arun KS , Oscar Salvador , Mike Rapoport , Alexander Duyck , Pavel Tatashin , Alexander Potapenko Subject: [PATCH v8 07/10] mm/pgdat: remove pgdat lru_lock Date: Thu, 16 Jan 2020 11:05:06 +0800 Message-Id: <1579143909-156105-8-git-send-email-alex.shi@linux.alibaba.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com> References: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com> X-Bogosity: Ham, tests=bogofilter, 
spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Now pgdat.lru_lock was replaced by lruvec lock. It's not used anymore. Signed-off-by: Alex Shi Cc: Andrew Morton Cc: Vlastimil Babka Cc: Dan Williams Cc: Michal Hocko Cc: Mel Gorman Cc: Wei Yang Cc: Arun KS Cc: Oscar Salvador Cc: Mike Rapoport Cc: Alexander Duyck Cc: Pavel Tatashin Cc: Alexander Potapenko Cc: Konstantin Khlebnikov Cc: Hugh Dickins Cc: Johannes Weiner Cc: Tejun Heo Cc: linux-mm@kvack.org Cc: linux-kernel@vger.kernel.org Cc: cgroups@vger.kernel.org --- include/linux/mmzone.h | 1 - mm/page_alloc.c | 1 - 2 files changed, 2 deletions(-) diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index c5455675acf2..7db0cec19aa0 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -769,7 +769,6 @@ struct deferred_split { /* Write-intensive fields used by page reclaim */ ZONE_PADDING(_pad1_) - spinlock_t lru_lock; #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT /* diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 4785a8a2040e..352f2a3d67b3 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -6712,7 +6712,6 @@ static void __meminit pgdat_init_internals(struct pglist_data *pgdat) init_waitqueue_head(&pgdat->pfmemalloc_wait); pgdat_page_ext_init(pgdat); - spin_lock_init(&pgdat->lru_lock); lruvec_init(&pgdat->__lruvec); } From patchwork Thu Jan 16 03:05:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Alex Shi X-Patchwork-Id: 11335929 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E458C138D for ; Thu, 16 Jan 2020 03:05:37 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id A04C024679 for ; Thu, 16 Jan 2020 03:05:37 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A04C024679 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 30CAE8E0027; Wed, 15 Jan 2020 22:05:23 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 298738E0026; Wed, 15 Jan 2020 22:05:23 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 15ED08E0027; Wed, 15 Jan 2020 22:05:23 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0005.hostedemail.com [216.40.44.5]) by kanga.kvack.org (Postfix) with ESMTP id E06738E0026 for ; Wed, 15 Jan 2020 22:05:22 -0500 (EST) Received: from smtpin06.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with SMTP id B0AAB8248047 for ; Thu, 16 Jan 2020 03:05:22 +0000 (UTC) X-FDA: 76382006484.06.chair18_860b244b7f155 X-Spam-Summary: 
2,0,0,584e86b76bb4461e,d41d8cd98f00b204,alex.shi@linux.alibaba.com,:cgroups@vger.kernel.org:linux-kernel@vger.kernel.org::akpm@linux-foundation.org:mgorman@techsingularity.net:tj@kernel.org:hughd@google.com:khlebnikov@yandex-team.ru:daniel.m.jordan@oracle.com:yang.shi@linux.alibaba.com:willy@infradead.org:shakeelb@google.com:hannes@cmpxchg.org:jgg@ziepe.ca:dan.j.williams@intel.com:vbabka@suse.cz:ira.weiny@intel.com:brouer@redhat.com:aryabinin@virtuozzo.com:jannh@google.com:logang@deltatee.com:jrdr.linux@gmail.com:rcampbell@nvidia.com:tobin@kernel.org:mhocko@suse.com:osalvador@suse.de:richard.weiyang@gmail.com:arunks@codeaurora.org:darrick.wong@oracle.com:amir73il@gmail.com:dchinner@redhat.com:josef@toxicpanda.com:kirill.shutemov@linux.intel.com:jglisse@redhat.com:mike.kravetz@oracle.com:ktkhai@virtuozzo.com:laoar.shao@gmail.com,RULES_HIT:4:41:69:152:355:379:541:800:960:966:968:973:988:989:1260:1261:1277:1311:1313:1314:1345:1359:1431:1437:1515:1516:1518:1593:15 94:1605: X-HE-Tag: chair18_860b244b7f155 X-Filterd-Recvd-Size: 16503 Received: from out30-131.freemail.mail.aliyun.com (out30-131.freemail.mail.aliyun.com [115.124.30.131]) by imf36.hostedemail.com (Postfix) with ESMTP for ; Thu, 16 Jan 2020 03:05:19 +0000 (UTC) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R101e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=e01e04426;MF=alex.shi@linux.alibaba.com;NM=1;PH=DS;RN=37;SR=0;TI=SMTPD_---0TnrL8le_1579143915; Received: from localhost(mailfrom:alex.shi@linux.alibaba.com fp:SMTPD_---0TnrL8le_1579143915) by smtp.aliyun-inc.com(127.0.0.1); Thu, 16 Jan 2020 11:05:15 +0800 From: Alex Shi To: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org, hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com, yang.shi@linux.alibaba.com, willy@infradead.org, shakeelb@google.com, hannes@cmpxchg.org Cc: Jason Gunthorpe , Dan Williams , Vlastimil Babka , Ira Weiny , Jesper Dangaard Brouer , Andrey Ryabinin , Jann Horn , Logan Gunthorpe , Souptick Joarder , Ralph Campbell , "Tobin C. Harding" , Michal Hocko , Oscar Salvador , Wei Yang , Arun KS , "Darrick J. Wong" , Amir Goldstein , Dave Chinner , Josef Bacik , "Kirill A. Shutemov" , =?utf-8?b?SsOpcsO0?= =?utf-8?b?bWUgR2xpc3Nl?= , Mike Kravetz , Kirill Tkhai , Yafang Shao Subject: [PATCH v8 08/10] mm/lru: revise the comments of lru_lock Date: Thu, 16 Jan 2020 11:05:07 +0800 Message-Id: <1579143909-156105-9-git-send-email-alex.shi@linux.alibaba.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com> References: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Hugh Dickins Since we changed the pgdat->lru_lock to lruvec->lru_lock, it's time to fix the incorrect comments in code. Also fixed some zone->lru_lock comment error from ancient time. etc. Signed-off-by: Hugh Dickins Signed-off-by: Alex Shi Cc: Andrew Morton Cc: Jason Gunthorpe Cc: Dan Williams Cc: Vlastimil Babka Cc: Ira Weiny Cc: Jesper Dangaard Brouer Cc: Andrey Ryabinin Cc: Jann Horn Cc: Logan Gunthorpe Cc: Souptick Joarder Cc: Ralph Campbell Cc: "Tobin C. Harding" Cc: Michal Hocko Cc: Oscar Salvador Cc: Mel Gorman Cc: Wei Yang Cc: Johannes Weiner Cc: Arun KS Cc: Matthew Wilcox Cc: "Darrick J. 
Wong" Cc: Amir Goldstein Cc: Dave Chinner Cc: Josef Bacik Cc: "Kirill A. Shutemov" Cc: "Jérôme Glisse" Cc: Mike Kravetz Cc: Hugh Dickins Cc: Kirill Tkhai Cc: Daniel Jordan Cc: Yafang Shao Cc: Yang Shi Cc: cgroups@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org --- Documentation/admin-guide/cgroup-v1/memcg_test.rst | 15 +++------------ Documentation/admin-guide/cgroup-v1/memory.rst | 6 +++--- Documentation/trace/events-kmem.rst | 2 +- Documentation/vm/unevictable-lru.rst | 22 ++++++++-------------- include/linux/mm_types.h | 2 +- include/linux/mmzone.h | 2 +- mm/filemap.c | 4 ++-- mm/rmap.c | 2 +- mm/vmscan.c | 12 ++++++++---- 9 files changed, 28 insertions(+), 39 deletions(-) diff --git a/Documentation/admin-guide/cgroup-v1/memcg_test.rst b/Documentation/admin-guide/cgroup-v1/memcg_test.rst index 3f7115e07b5d..0b9f91589d3d 100644 --- a/Documentation/admin-guide/cgroup-v1/memcg_test.rst +++ b/Documentation/admin-guide/cgroup-v1/memcg_test.rst @@ -133,18 +133,9 @@ Under below explanation, we assume CONFIG_MEM_RES_CTRL_SWAP=y. 8. LRU ====== - Each memcg has its own private LRU. Now, its handling is under global - VM's control (means that it's handled under global pgdat->lru_lock). - Almost all routines around memcg's LRU is called by global LRU's - list management functions under pgdat->lru_lock. - - A special function is mem_cgroup_isolate_pages(). This scans - memcg's private LRU and call __isolate_lru_page() to extract a page - from LRU. - - (By __isolate_lru_page(), the page is removed from both of global and - private LRU.) - + Each memcg has its own vector of LRUs (inactive anon, active anon, + inactive file, active file, unevictable) of pages from each node, + each LRU handled under a single lru_lock for that memcg and node. 9. Typical Tests. ================= diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst index 0ae4f564c2d6..60d97e8b7f3c 100644 --- a/Documentation/admin-guide/cgroup-v1/memory.rst +++ b/Documentation/admin-guide/cgroup-v1/memory.rst @@ -297,13 +297,13 @@ When oom event notifier is registered, event will be delivered. PG_locked. mm->page_table_lock - pgdat->lru_lock + lruvec->lru_lock lock_page_cgroup. In many cases, just lock_page_cgroup() is called. - per-zone-per-cgroup LRU (cgroup's private LRU) is just guarded by - pgdat->lru_lock, it has no lock of its own. + per-node-per-cgroup LRU (cgroup's private LRU) is just guarded by + lruvec->lru_lock, it has no lock of its own. 2.7 Kernel Memory Extension (CONFIG_MEMCG_KMEM) ----------------------------------------------- diff --git a/Documentation/trace/events-kmem.rst b/Documentation/trace/events-kmem.rst index 555484110e36..68fa75247488 100644 --- a/Documentation/trace/events-kmem.rst +++ b/Documentation/trace/events-kmem.rst @@ -69,7 +69,7 @@ When pages are freed in batch, the also mm_page_free_batched is triggered. Broadly speaking, pages are taken off the LRU lock in bulk and freed in batch with a page list. Significant amounts of activity here could indicate that the system is under memory pressure and can also indicate -contention on the zone->lru_lock. +contention on the lruvec->lru_lock. 4. Per-CPU Allocator Activity ============================= diff --git a/Documentation/vm/unevictable-lru.rst b/Documentation/vm/unevictable-lru.rst index 17d0861b0f1d..0e1490524f53 100644 --- a/Documentation/vm/unevictable-lru.rst +++ b/Documentation/vm/unevictable-lru.rst @@ -33,7 +33,7 @@ reclaim in Linux. 
The problems have been observed at customer sites on large memory x86_64 systems. To illustrate this with an example, a non-NUMA x86_64 platform with 128GB of -main memory will have over 32 million 4k pages in a single zone. When a large +main memory will have over 32 million 4k pages in a single node. When a large fraction of these pages are not evictable for any reason [see below], vmscan will spend a lot of time scanning the LRU lists looking for the small fraction of pages that are evictable. This can result in a situation where all CPUs are @@ -55,7 +55,7 @@ unevictable, either by definition or by circumstance, in the future. The Unevictable Page List ------------------------- -The Unevictable LRU infrastructure consists of an additional, per-zone, LRU list +The Unevictable LRU infrastructure consists of an additional, per-node, LRU list called the "unevictable" list and an associated page flag, PG_unevictable, to indicate that the page is being managed on the unevictable list. @@ -84,15 +84,9 @@ The unevictable list does not differentiate between file-backed and anonymous, swap-backed pages. This differentiation is only important while the pages are, in fact, evictable. -The unevictable list benefits from the "arrayification" of the per-zone LRU +The unevictable list benefits from the "arrayification" of the per-node LRU lists and statistics originally proposed and posted by Christoph Lameter. -The unevictable list does not use the LRU pagevec mechanism. Rather, -unevictable pages are placed directly on the page's zone's unevictable list -under the zone lru_lock. This allows us to prevent the stranding of pages on -the unevictable list when one task has the page isolated from the LRU and other -tasks are changing the "evictability" state of the page. - Memory Control Group Interaction -------------------------------- @@ -101,8 +95,8 @@ The unevictable LRU facility interacts with the memory control group [aka memory controller; see Documentation/admin-guide/cgroup-v1/memory.rst] by extending the lru_list enum. -The memory controller data structure automatically gets a per-zone unevictable -list as a result of the "arrayification" of the per-zone LRU lists (one per +The memory controller data structure automatically gets a per-node unevictable +list as a result of the "arrayification" of the per-node LRU lists (one per lru_list enum element). The memory controller tracks the movement of pages to and from the unevictable list. @@ -196,7 +190,7 @@ for the sake of expediency, to leave a unevictable page on one of the regular active/inactive LRU lists for vmscan to deal with. vmscan checks for such pages in all of the shrink_{active|inactive|page}_list() functions and will "cull" such pages that it encounters: that is, it diverts those pages to the -unevictable list for the zone being scanned. +unevictable list for the node being scanned. There may be situations where a page is mapped into a VM_LOCKED VMA, but the page is not marked as PG_mlocked. Such pages will make it all the way to @@ -328,7 +322,7 @@ If the page was NOT already mlocked, mlock_vma_page() attempts to isolate the page from the LRU, as it is likely on the appropriate active or inactive list at that time. If the isolate_lru_page() succeeds, mlock_vma_page() will put back the page - by calling putback_lru_page() - which will notice that the page -is now mlocked and divert the page to the zone's unevictable list. If +is now mlocked and divert the page to the node's unevictable list. 
If mlock_vma_page() is unable to isolate the page from the LRU, vmscan will handle it later if and when it attempts to reclaim the page. @@ -603,7 +597,7 @@ Some examples of these unevictable pages on the LRU lists are: unevictable list in mlock_vma_page(). shrink_inactive_list() also diverts any unevictable pages that it finds on the -inactive lists to the appropriate zone's unevictable list. +inactive lists to the appropriate node's unevictable list. shrink_inactive_list() should only see SHM_LOCK'd pages that became SHM_LOCK'd after shrink_active_list() had moved them to the inactive list, or pages mapped diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 270aa8fd2800..ff08a6a8145c 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -78,7 +78,7 @@ struct page { struct { /* Page cache and anonymous pages */ /** * @lru: Pageout list, eg. active_list protected by - * pgdat->lru_lock. Sometimes used as a generic list + * lruvec->lru_lock. Sometimes used as a generic list * by the page owner. */ struct list_head lru; diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 7db0cec19aa0..d73be191e9f8 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -159,7 +159,7 @@ static inline bool free_area_empty(struct free_area *area, int migratetype) struct pglist_data; /* - * zone->lock and the zone lru_lock are two of the hottest locks in the kernel. + * zone->lock and the lru_lock are two of the hottest locks in the kernel. * So add a wild amount of padding here to ensure that they fall into separate * cachelines. There are very few zone structures in the machine, so space * consumption is not a concern here. diff --git a/mm/filemap.c b/mm/filemap.c index bf6aa30be58d..6dcdf06660fb 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -101,8 +101,8 @@ * ->swap_lock (try_to_unmap_one) * ->private_lock (try_to_unmap_one) * ->i_pages lock (try_to_unmap_one) - * ->pgdat->lru_lock (follow_page->mark_page_accessed) - * ->pgdat->lru_lock (check_pte_range->isolate_lru_page) + * ->lruvec->lru_lock (follow_page->mark_page_accessed) + * ->lruvec->lru_lock (check_pte_range->isolate_lru_page) * ->private_lock (page_remove_rmap->set_page_dirty) * ->i_pages lock (page_remove_rmap->set_page_dirty) * bdi.wb->list_lock (page_remove_rmap->set_page_dirty) diff --git a/mm/rmap.c b/mm/rmap.c index b3e381919835..39052794cb46 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -27,7 +27,7 @@ * mapping->i_mmap_rwsem * anon_vma->rwsem * mm->page_table_lock or pte_lock - * pgdat->lru_lock (in mark_page_accessed, isolate_lru_page) + * lruvec->lru_lock (in mark_page_accessed, isolate_lru_page) * swap_lock (in swap_duplicate, swap_info_get) * mmlist_lock (in mmput, drain_mmlist and others) * mapping->private_lock (in __set_page_dirty_buffers) diff --git a/mm/vmscan.c b/mm/vmscan.c index ee20a64a7ccc..2a3fca20d456 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1626,14 +1626,16 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec, } /** - * pgdat->lru_lock is heavily contended. Some of the functions that + * Isolating page from the lruvec to fill in @dst list by nr_to_scan times. + * + * lruvec->lru_lock is heavily contended. Some of the functions that * shrink the lists perform better by taking out a batch of pages * and working on them outside the LRU lock. * * For pagecache intensive workloads, this function is the hottest * spot in the kernel (apart from copy_*_user functions). * - * Appropriate locks must be held before calling this function. 
+ * Lru_lock must be held before calling this function. * * @nr_to_scan: The number of eligible pages to look through on the list. * @lruvec: The LRU vector to pull pages from. @@ -1820,14 +1822,16 @@ static int too_many_isolated(struct pglist_data *pgdat, int file, /* * This moves pages from @list to corresponding LRU list. + * The pages from @list is out of any lruvec, and in the end list reuses as + * pages_to_free list. * * We move them the other way if the page is referenced by one or more * processes, from rmap. * * If the pages are mostly unmapped, the processing is fast and it is - * appropriate to hold zone_lru_lock across the whole operation. But if + * appropriate to hold lru_lock across the whole operation. But if * the pages are mapped, the processing is slow (page_referenced()) so we - * should drop zone_lru_lock around each page. It's impossible to balance + * should drop lru_lock around each page. It's impossible to balance * this, so instead we remove the pages from the LRU while processing them. * It is safe to rely on PG_active against the non-LRU pages in here because * nobody will play with that bit on a non-LRU page. From patchwork Thu Jan 16 03:05:08 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Shi X-Patchwork-Id: 11335921 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 606AA139A for ; Thu, 16 Jan 2020 03:05:28 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 36EE7214AF for ; Thu, 16 Jan 2020 03:05:28 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 36EE7214AF Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 7AD848E001C; Wed, 15 Jan 2020 22:05:21 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 70A618E0025; Wed, 15 Jan 2020 22:05:21 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3CECC8E0024; Wed, 15 Jan 2020 22:05:21 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0186.hostedemail.com [216.40.44.186]) by kanga.kvack.org (Postfix) with ESMTP id 074958E0022 for ; Wed, 15 Jan 2020 22:05:21 -0500 (EST) Received: from smtpin23.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with SMTP id C27614995F9 for ; Thu, 16 Jan 2020 03:05:20 +0000 (UTC) X-FDA: 76382006400.23.fifth00_85dbc4c2b5f11 X-Spam-Summary: 
2,0,0,4c747f11d2a1b796,d41d8cd98f00b204,alex.shi@linux.alibaba.com,:cgroups@vger.kernel.org:linux-kernel@vger.kernel.org::akpm@linux-foundation.org:mgorman@techsingularity.net:tj@kernel.org:hughd@google.com:khlebnikov@yandex-team.ru:daniel.m.jordan@oracle.com:yang.shi@linux.alibaba.com:willy@infradead.org:shakeelb@google.com:hannes@cmpxchg.org:mhocko@kernel.org:vdavydov.dev@gmail.com,RULES_HIT:41:355:379:541:800:960:973:988:989:1260:1261:1345:1359:1431:1437:1534:1543:1711:1730:1747:1777:1792:2393:2559:2562:2901:3138:3139:3140:3141:3142:3354:3866:4321:4605:5007:6114:6119:6261:6642:6737:7514:7903:9040:10004:11026:11473:11658:11914:12043:12048:12296:12297:12438:12555:12895:12986:13846:14096:14181:14394:14721:14915:21060:21080:21451:21627:21966:21990:30054:30070,0,RBL:115.124.30.54:@linux.alibaba.com:.lbl8.mailshell.net-62.20.2.100 64.201.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:0:0, LFtime:2 X-HE-Tag: fifth00_85dbc4c2b5f11 X-Filterd-Recvd-Size: 4715 Received: from out30-54.freemail.mail.aliyun.com (out30-54.freemail.mail.aliyun.com [115.124.30.54]) by imf24.hostedemail.com (Postfix) with ESMTP for ; Thu, 16 Jan 2020 03:05:19 +0000 (UTC) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R891e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=e01f04397;MF=alex.shi@linux.alibaba.com;NM=1;PH=DS;RN=15;SR=0;TI=SMTPD_---0TnrL8m-_1579143915; Received: from localhost(mailfrom:alex.shi@linux.alibaba.com fp:SMTPD_---0TnrL8m-_1579143915) by smtp.aliyun-inc.com(127.0.0.1); Thu, 16 Jan 2020 11:05:15 +0800 From: Alex Shi To: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org, hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com, yang.shi@linux.alibaba.com, willy@infradead.org, shakeelb@google.com, hannes@cmpxchg.org Cc: Michal Hocko , Vladimir Davydov Subject: [PATCH v8 09/10] mm/lru: add debug checking for page memcg moving Date: Thu, 16 Jan 2020 11:05:08 +0800 Message-Id: <1579143909-156105-10-git-send-email-alex.shi@linux.alibaba.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com> References: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This debug patch could give some clues if something has been overlooked. Hugh Dickins reported a bug in this patch, thanks!
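For reference: the relock_page_lruvec_irq() helper used by patches 04-06 above is introduced earlier in this series and is not shown in this part of the archive. A minimal sketch of its presumed shape, reconstructed from the open-coded lock/unlock pattern it replaces in check_move_unevictable_pages() and release_pages() (the real helper may differ in detail), looks like this:

/*
 * Sketch only: reconstructed from the open-coded pattern replaced in
 * check_move_unevictable_pages() and release_pages(); the helper added
 * earlier in this series may differ in detail.
 */
static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
					struct lruvec *locked_lruvec)
{
	struct pglist_data *pgdat = page_pgdat(page);
	struct lruvec *lruvec = mem_cgroup_page_lruvec(page, pgdat);

	/* The page belongs to the lruvec we already hold: keep the lock. */
	if (locked_lruvec == lruvec)
		return lruvec;

	/* Different lruvec: drop the old lock and take the new one. */
	if (locked_lruvec)
		unlock_page_lruvec_irq(locked_lruvec);

	return lock_page_lruvec_irq(page);
}

The debug check added in this patch guards exactly the invariant this pattern depends on: the memcg of the locked lruvec must match page->mem_cgroup at the time the page is handled.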
Signed-off-by: Alex Shi Cc: Johannes Weiner Cc: Hugh Dickins Cc: Michal Hocko Cc: Vladimir Davydov Cc: Andrew Morton Cc: cgroups@vger.kernel.org Cc: linux-mm@kvack.org Cc: linux-kernel@vger.kernel.org --- include/linux/memcontrol.h | 5 +++++ mm/compaction.c | 2 ++ mm/memcontrol.c | 15 +++++++++++++++ 3 files changed, 22 insertions(+) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 09e861df48e8..ece88bb11d0f 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -421,6 +421,7 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg, struct lruvec *lock_page_lruvec_irqsave(struct page *, unsigned long*); void unlock_page_lruvec_irq(struct lruvec *); void unlock_page_lruvec_irqrestore(struct lruvec *, unsigned long); +void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page); struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p); @@ -1183,6 +1184,10 @@ static inline void count_memcg_page_event(struct page *page, void count_memcg_event_mm(struct mm_struct *mm, enum vm_event_item idx) { } + +static inline void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page) +{ +} #endif /* CONFIG_MEMCG */ /* idx can be of type enum memcg_stat_item or node_stat_item */ diff --git a/mm/compaction.c b/mm/compaction.c index 8c0a2da217d8..151242817bf4 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -971,6 +971,8 @@ static bool too_many_isolated(pg_data_t *pgdat) compact_lock_irqsave(&lruvec->lru_lock, &flags, cc); locked_lruvec = lruvec; + lruvec_memcg_debug(lruvec, page); + /* Try get exclusive access under lock */ if (!skip_updated) { skip_updated = true; diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 00fef8ddbd08..a567fd868739 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -1238,6 +1238,19 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd return lruvec; } +void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page) +{ +#ifdef CONFIG_DEBUG_VM + if (mem_cgroup_disabled()) + return; + + if (!page->mem_cgroup) + VM_BUG_ON_PAGE(lruvec_memcg(lruvec) != root_mem_cgroup, page); + else + VM_BUG_ON_PAGE(lruvec_memcg(lruvec) != page->mem_cgroup, page); +#endif +} + struct lruvec *lock_page_lruvec_irq(struct page *page) { struct lruvec *lruvec; @@ -1247,6 +1260,7 @@ struct lruvec *lock_page_lruvec_irq(struct page *page) lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page)); spin_lock_irq(&lruvec->lru_lock); + lruvec_memcg_debug(lruvec, page); return lruvec; } @@ -1259,6 +1273,7 @@ struct lruvec *lock_page_lruvec_irqsave(struct page *page, unsigned long *flags) lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page)); spin_lock_irqsave(&lruvec->lru_lock, *flags); + lruvec_memcg_debug(lruvec, page); return lruvec; } From patchwork Thu Jan 16 03:05:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alex Shi X-Patchwork-Id: 11335925 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1B96C92A for ; Thu, 16 Jan 2020 03:05:33 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id E687D24671 for ; Thu, 16 Jan 2020 03:05:32 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E687D24671 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: mail.kernel.org; 
spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id D47158E0025; Wed, 15 Jan 2020 22:05:21 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id CAC4C8E0024; Wed, 15 Jan 2020 22:05:21 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9C3CA8E0026; Wed, 15 Jan 2020 22:05:21 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0068.hostedemail.com [216.40.44.68]) by kanga.kvack.org (Postfix) with ESMTP id 66BE58E0024 for ; Wed, 15 Jan 2020 22:05:21 -0500 (EST) Received: from smtpin29.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with SMTP id 318314995F9 for ; Thu, 16 Jan 2020 03:05:21 +0000 (UTC) X-FDA: 76382006442.29.smoke82_85ee40fd25656 X-Spam-Summary: 2,0,0,ba65b831f1562873,d41d8cd98f00b204,alex.shi@linux.alibaba.com,:cgroups@vger.kernel.org:linux-kernel@vger.kernel.org::akpm@linux-foundation.org:mgorman@techsingularity.net:tj@kernel.org:hughd@google.com:khlebnikov@yandex-team.ru:daniel.m.jordan@oracle.com:yang.shi@linux.alibaba.com:willy@infradead.org:shakeelb@google.com:hannes@cmpxchg.org:mhocko@kernel.org:vdavydov.dev@gmail.com,RULES_HIT:41:355:379:541:800:960:973:988:989:1260:1261:1345:1359:1431:1437:1534:1541:1711:1714:1730:1747:1777:1792:2393:2559:2562:3138:3139:3140:3141:3142:3351:3868:3872:5007:6261:6737:7514:10004:11026:11473:11658:11914:12043:12048:12296:12297:12438:12555:12895:13069:13311:13357:13846:14096:14181:14384:14394:14721:14915:21060:21080:21450:21451:21627:21990:30054:30070,0,RBL:115.124.30.44:@linux.alibaba.com:.lbl8.mailshell.net-62.20.2.100 64.201.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:24,LU A_SUMMAR X-HE-Tag: smoke82_85ee40fd25656 X-Filterd-Recvd-Size: 2471 Received: from out30-44.freemail.mail.aliyun.com (out30-44.freemail.mail.aliyun.com [115.124.30.44]) by imf21.hostedemail.com (Postfix) with ESMTP for ; Thu, 16 Jan 2020 03:05:19 +0000 (UTC) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R151e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=e01e07484;MF=alex.shi@linux.alibaba.com;NM=1;PH=DS;RN=15;SR=0;TI=SMTPD_---0TnrFiO5_1579143916; Received: from localhost(mailfrom:alex.shi@linux.alibaba.com fp:SMTPD_---0TnrFiO5_1579143916) by smtp.aliyun-inc.com(127.0.0.1); Thu, 16 Jan 2020 11:05:16 +0800 From: Alex Shi To: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org, hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com, yang.shi@linux.alibaba.com, willy@infradead.org, shakeelb@google.com, hannes@cmpxchg.org Cc: Michal Hocko , Vladimir Davydov Subject: [PATCH v8 10/10] mm/memcg: add debug checking in lock_page_memcg Date: Thu, 16 Jan 2020 11:05:09 +0800 Message-Id: <1579143909-156105-11-git-send-email-alex.shi@linux.alibaba.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com> References: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This extra irq disable/enable and BUG_ON checking costs 5% 
readtwice performance on a 2 socket * 26 cores * HT box. So put it into CONFIG_PROVE_LOCKING. Signed-off-by: Alex Shi Cc: Johannes Weiner Cc: Michal Hocko Cc: Vladimir Davydov Cc: Andrew Morton Cc: cgroups@vger.kernel.org Cc: linux-mm@kvack.org Cc: linux-kernel@vger.kernel.org --- mm/memcontrol.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index a567fd868739..4ad1b4d2eb1e 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -2029,6 +2029,12 @@ struct mem_cgroup *lock_page_memcg(struct page *page) if (unlikely(!memcg)) return NULL; +#ifdef CONFIG_PROVE_LOCKING + local_irq_save(flags); + might_lock(&memcg->move_lock); + local_irq_restore(flags); +#endif + if (atomic_read(&memcg->moving_account) <= 0) return memcg;