From patchwork Mon Mar 2 11:00:16 2020
X-Patchwork-Submitter: Alex Shi
X-Patchwork-Id: 11415267
From: Alex Shi <alex.shi@linux.alibaba.com>
To: cgroups@vger.kernel.org, akpm@linux-foundation.org,
    mgorman@techsingularity.net, tj@kernel.org, hughd@google.com,
    khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
    yang.shi@linux.alibaba.com, willy@infradead.org, hannes@cmpxchg.org,
    lkp@intel.com
Cc: Alex Shi <alex.shi@linux.alibaba.com>, "Kirill A. Shutemov",
    Andrea Arcangeli, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Shutemov" , Andrea Arcangeli , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH v9 06/20] mm/thp: narrow lru locking Date: Mon, 2 Mar 2020 19:00:16 +0800 Message-Id: <1583146830-169516-7-git-send-email-alex.shi@linux.alibaba.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1583146830-169516-1-git-send-email-alex.shi@linux.alibaba.com> References: <1583146830-169516-1-git-send-email-alex.shi@linux.alibaba.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Lru locking just guard the lru list and subpage's Mlocked. Including other things can't give help just delay the locking release. So narrow the locking for early lock release and better code meaning. Signed-off-by: Alex Shi Cc: Kirill A. Shutemov Cc: Andrea Arcangeli Cc: Johannes Weiner Cc: Andrew Morton Cc: linux-mm@kvack.org Cc: linux-kernel@vger.kernel.org --- mm/huge_memory.c | 17 +++++++---------- 1 file changed, 7 insertions(+), 10 deletions(-) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 599367d25fca..3835f87d03fd 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2542,13 +2542,14 @@ static void __split_huge_page_tail(struct page *head, int tail, } static void __split_huge_page(struct page *page, struct list_head *list, - pgoff_t end, unsigned long flags) + pgoff_t end) { struct page *head = compound_head(page); pg_data_t *pgdat = page_pgdat(head); struct lruvec *lruvec; struct address_space *swap_cache = NULL; unsigned long offset = 0; + unsigned long flags; int i; lruvec = mem_cgroup_page_lruvec(head, pgdat); @@ -2564,6 +2565,9 @@ static void __split_huge_page(struct page *page, struct list_head *list, xa_lock(&swap_cache->i_pages); } + /* Lru list would be changed, don't care head's LRU bit. 
+	spin_lock_irqsave(&pgdat->lru_lock, flags);
+
 	for (i = HPAGE_PMD_NR - 1; i >= 1; i--) {
 		__split_huge_page_tail(head, i, lruvec, list);
 		/* Some pages can be beyond i_size: drop them from page cache */
@@ -2581,6 +2585,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 					head + i, 0);
 		}
 	}
+	spin_unlock_irqrestore(&pgdat->lru_lock, flags);
 
 	ClearPageCompound(head);
 
@@ -2601,8 +2606,6 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		xa_unlock(&head->mapping->i_pages);
 	}
 
-	spin_unlock_irqrestore(&pgdat->lru_lock, flags);
-
 	remap_page(head);
 
 	for (i = 0; i < HPAGE_PMD_NR; i++) {
@@ -2740,13 +2743,11 @@ bool can_split_huge_page(struct page *page, int *pextra_pins)
 int split_huge_page_to_list(struct page *page, struct list_head *list)
 {
 	struct page *head = compound_head(page);
-	struct pglist_data *pgdata = NODE_DATA(page_to_nid(head));
 	struct deferred_split *ds_queue = get_deferred_split_queue(head);
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 	int count, mapcount, extra_pins, ret;
 	bool mlocked;
-	unsigned long flags;
 	pgoff_t end;
 
 	VM_BUG_ON_PAGE(is_huge_zero_page(head), head);
@@ -2812,9 +2813,6 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	if (mlocked)
 		lru_add_drain();
 
-	/* prevent PageLRU to go away from under us, and freeze lru stats */
-	spin_lock_irqsave(&pgdata->lru_lock, flags);
-
 	if (mapping) {
 		XA_STATE(xas, &mapping->i_pages, page_index(head));
 
@@ -2844,7 +2842,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 			__dec_node_page_state(head, NR_FILE_THPS);
 	}
 
-	__split_huge_page(page, list, end, flags);
+	__split_huge_page(page, list, end);
 
 	if (PageSwapCache(head)) {
 		swp_entry_t entry = { .val = page_private(head) };
@@ -2863,7 +2861,6 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		spin_unlock(&ds_queue->split_queue_lock);
 fail:		if (mapping)
 			xa_unlock(&mapping->i_pages);
-		spin_unlock_irqrestore(&pgdata->lru_lock, flags);
 		remap_page(head);
 		ret = -EBUSY;
 	}
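
As an aside (not part of the patch): the general pattern applied above is to move work
that does not touch the lock-protected data outside the critical section, so the lock is
taken right before the list manipulation and dropped right after it, instead of being
held across the xarray and swap-cache handling as well. The following minimal,
self-contained userspace sketch shows the same before/after shape; it uses a pthread
spinlock and an invented linked list in place of pgdat->lru_lock and the LRU list, and
every name in it is made up purely for illustration.

/* narrow_lock_sketch.c - illustration only; every name here is invented.
 * Build: gcc -pthread -o narrow_lock_sketch narrow_lock_sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
	struct node *next;
	int val;
};

static pthread_spinlock_t list_lock;	/* stand-in for pgdat->lru_lock */
static struct node *list_head;		/* stand-in for the LRU list */

/* Per-item work that never touches the shared list. */
static int prepare_val(int val)
{
	return val * 2;
}

/* Broad locking: unrelated work is done while the lock is held. */
static void add_node_broad(int val)
{
	struct node *n = malloc(sizeof(*n));

	if (!n)
		return;
	pthread_spin_lock(&list_lock);
	n->val = prepare_val(val);	/* needlessly inside the critical section */
	n->next = list_head;
	list_head = n;
	pthread_spin_unlock(&list_lock);
}

/* Narrow locking: only the list manipulation is guarded. */
static void add_node_narrow(int val)
{
	struct node *n = malloc(sizeof(*n));

	if (!n)
		return;
	n->val = prepare_val(val);	/* done before taking the lock */
	pthread_spin_lock(&list_lock);
	n->next = list_head;
	list_head = n;
	pthread_spin_unlock(&list_lock);
}

int main(void)
{
	pthread_spin_init(&list_lock, PTHREAD_PROCESS_PRIVATE);
	add_node_broad(1);
	add_node_narrow(2);
	printf("list head value: %d\n", list_head->val);
	pthread_spin_destroy(&list_lock);
	return 0;
}

The patch makes the same kind of change in __split_huge_page() and
split_huge_page_to_list(): lru_lock is now taken just before the tail pages are walked
and released just after, rather than being held from before the i_pages handling until
after the whole split has finished.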