From patchwork Mon Aug 24 12:55:01 2020
X-Patchwork-Submitter: Alex Shi <alex.shi@linux.alibaba.com>
X-Patchwork-Id: 11733159
From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org,
	mgorman@techsingularity.net, tj@kernel.org, hughd@google.com,
	khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com, willy@infradead.org,
	hannes@cmpxchg.org, lkp@intel.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, shakeelb@google.com,
	iamjoonsoo.kim@lge.com, richard.weiyang@gmail.com, kirill@shutemov.name,
	alexander.duyck@gmail.com, rong.a.chen@intel.com, mhocko@suse.com,
	vdavydov.dev@gmail.com, shy828301@gmail.com
Cc: Alexander Duyck, Stephen Rothwell
Subject: [PATCH v18 28/32] mm/compaction: Drop locked from isolate_migratepages_block
Date: Mon, 24 Aug 2020 20:55:01 +0800
Message-Id: <1598273705-69124-29-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1598273705-69124-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1598273705-69124-1-git-send-email-alex.shi@linux.alibaba.com>

From: Alexander Duyck

We can drop the need for the locked variable by making use of the
lruvec_holds_page_lru_lock function. By doing this we can avoid some rcu
locking ugliness for the case where the lruvec is still holding the LRU
lock associated with the page. Instead we can just use the lruvec, and if
it is NULL we assume the lock was released.

Signed-off-by: Alexander Duyck
Signed-off-by: Alex Shi
Cc: Andrew Morton
Cc: Stephen Rothwell
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
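
A note for reviewers, not part of the change: the idea is to fold the "which
lru_lock do I hold?" bookkeeping into the lruvec pointer itself, with NULL
meaning "no lock held", and to drop and retake the lock only when the current
page belongs to a different lruvec. The small user-space program below
sketches that same pattern; it is not kernel code, and every name in it
(bucket, bucket_for, held) is invented purely for illustration.

#include <pthread.h>
#include <stdio.h>
#include <stddef.h>

struct bucket {
	pthread_mutex_t lock;
	long nr_items;
};

static struct bucket buckets[2] = {
	{ PTHREAD_MUTEX_INITIALIZER, 0 },
	{ PTHREAD_MUTEX_INITIALIZER, 0 },
};

/* Stand-in for the page -> lruvec lookup: map an item to its bucket. */
static struct bucket *bucket_for(int item)
{
	return &buckets[item & 1];
}

int main(void)
{
	struct bucket *held = NULL;	/* plays the role of "lruvec" */
	static const int items[] = { 0, 2, 4, 1, 3 };
	size_t i;

	for (i = 0; i < sizeof(items) / sizeof(items[0]); i++) {
		struct bucket *b = bucket_for(items[i]);

		/*
		 * Retake the lock only when moving to a different bucket;
		 * this mirrors the "!lruvec || !lruvec_holds_page_lru_lock()"
		 * recheck in the patch.
		 */
		if (held != b) {
			if (held)
				pthread_mutex_unlock(&held->lock);
			pthread_mutex_lock(&b->lock);
			held = b;
		}
		held->nr_items++;
	}

	if (held)
		pthread_mutex_unlock(&held->lock);

	printf("bucket0=%ld bucket1=%ld\n",
	       buckets[0].nr_items, buckets[1].nr_items);
	return 0;
}

As in the patch, one pointer serves both as the lock handle and as the
held/not-held state, so there is no separate "locked" variable that can
fall out of sync.
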
 mm/compaction.c | 46 +++++++++++++++++++++-------------------------
 1 file changed, 21 insertions(+), 25 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index b724eacf6421..6bf5ccd8fcf6 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -803,9 +803,8 @@ static bool too_many_isolated(pg_data_t *pgdat)
 {
 	pg_data_t *pgdat = cc->zone->zone_pgdat;
 	unsigned long nr_scanned = 0, nr_isolated = 0;
-	struct lruvec *lruvec;
+	struct lruvec *lruvec = NULL;
 	unsigned long flags = 0;
-	struct lruvec *locked = NULL;
 	struct page *page = NULL, *valid_page = NULL;
 	unsigned long start_pfn = low_pfn;
 	bool skip_on_failure = false;
@@ -866,9 +865,9 @@ static bool too_many_isolated(pg_data_t *pgdat)
 		 * a fatal signal is pending.
 		 */
 		if (!(low_pfn % SWAP_CLUSTER_MAX)) {
-			if (locked) {
-				unlock_page_lruvec_irqrestore(locked, flags);
-				locked = NULL;
+			if (lruvec) {
+				unlock_page_lruvec_irqrestore(lruvec, flags);
+				lruvec = NULL;
 			}
 
 			if (fatal_signal_pending(current)) {
@@ -949,9 +948,9 @@ static bool too_many_isolated(pg_data_t *pgdat)
 			 */
 			if (unlikely(__PageMovable(page)) &&
 					!PageIsolated(page)) {
-				if (locked) {
-					unlock_page_lruvec_irqrestore(locked, flags);
-					locked = NULL;
+				if (lruvec) {
+					unlock_page_lruvec_irqrestore(lruvec, flags);
+					lruvec = NULL;
 				}
 
 				if (!isolate_movable_page(page, isolate_mode))
@@ -992,16 +991,14 @@ static bool too_many_isolated(pg_data_t *pgdat)
 		if (!TestClearPageLRU(page))
 			goto isolate_fail_put;
 
-		rcu_read_lock();
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-
 		/* If we already hold the lock, we can skip some rechecking */
-		if (lruvec != locked) {
-			if (locked)
-				unlock_page_lruvec_irqrestore(locked, flags);
+		if (!lruvec || !lruvec_holds_page_lru_lock(page, lruvec)) {
+			if (lruvec)
+				unlock_page_lruvec_irqrestore(lruvec, flags);
+			rcu_read_lock();
+			lruvec = mem_cgroup_page_lruvec(page, pgdat);
 
 			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
-			locked = lruvec;
 			rcu_read_unlock();
 
 			lruvec_memcg_debug(lruvec, page);
@@ -1023,8 +1020,7 @@ static bool too_many_isolated(pg_data_t *pgdat)
 				SetPageLRU(page);
 				goto isolate_fail_put;
 			}
-		} else
-			rcu_read_unlock();
+		}
 
 		/* The whole page is taken off the LRU; skip the tail pages. */
 		if (PageCompound(page))
@@ -1057,9 +1053,9 @@ static bool too_many_isolated(pg_data_t *pgdat)
 
 isolate_fail_put:
 		/* Avoid potential deadlock in freeing page under lru_lock */
-		if (locked) {
-			unlock_page_lruvec_irqrestore(locked, flags);
-			locked = NULL;
+		if (lruvec) {
+			unlock_page_lruvec_irqrestore(lruvec, flags);
+			lruvec = NULL;
 		}
 		put_page(page);
 
@@ -1073,9 +1069,9 @@ static bool too_many_isolated(pg_data_t *pgdat)
 		 * page anyway.
 		 */
 		if (nr_isolated) {
-			if (locked) {
-				unlock_page_lruvec_irqrestore(locked, flags);
-				locked = NULL;
+			if (lruvec) {
+				unlock_page_lruvec_irqrestore(lruvec, flags);
+				lruvec = NULL;
 			}
 			putback_movable_pages(&cc->migratepages);
 			cc->nr_migratepages = 0;
@@ -1102,8 +1098,8 @@ static bool too_many_isolated(pg_data_t *pgdat)
 		page = NULL;
 
 isolate_abort:
-	if (locked)
-		unlock_page_lruvec_irqrestore(locked, flags);
+	if (lruvec)
+		unlock_page_lruvec_irqrestore(lruvec, flags);
 	if (page) {
 		SetPageLRU(page);
 		put_page(page);