From patchwork Wed May 27 18:29:58 2020
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 11573759
Date: Wed, 27 May 2020 11:29:58 -0700
Message-Id: <20200527182958.252402-1-shakeelb@google.com>
Subject: [PATCH resend 3/3] mm: fix LRU balancing effect of new transparent huge pages
From: Shakeel Butt
To: Mel Gorman, Johannes Weiner, Roman Gushchin, Michal Hocko
Cc: Andrew Morton, Minchan Kim, Rik van Riel, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Shakeel Butt

From: Johannes Weiner

Currently, THP are counted as single pages until they are split right
before being swapped out. However, at that point the VM is already in
the middle of reclaim, and adjusting the LRU balance then is useless.

Always account THP by the number of basepages, and remove the fixup
from the splitting path.
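To illustrate the accounting change, here is a minimal sketch (not part
of the patch; activate_page_sketch() is a hypothetical caller, and it
assumes the hpage_nr_pages() helper available in kernels of this era
plus the reworked update_page_reclaim_stat() signature below):

	/*
	 * Hypothetical caller, for illustration only: derive the
	 * basepage count of a possibly-huge page and credit all of it
	 * to the LRU balance statistics at once, rather than counting
	 * 1 now and fixing it up when the THP is split.
	 */
	static void activate_page_sketch(struct page *page,
					 struct lruvec *lruvec)
	{
		/* 1 for a base page, e.g. 512 for a 2MB THP on x86-64 */
		int nr_pages = hpage_nr_pages(page);
		int file = page_is_file_lru(page);

		update_page_reclaim_stat(lruvec, file, 1, nr_pages);
	}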
Signed-off-by: Johannes Weiner
Signed-off-by: Shakeel Butt
---
 mm/swap.c | 23 +++++++++--------------
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 4eb179ee0b72..b75c0ce90418 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -262,14 +262,14 @@ void rotate_reclaimable_page(struct page *page)
 	}
 }
 
-static void update_page_reclaim_stat(struct lruvec *lruvec,
-				     int file, int rotated)
+static void update_page_reclaim_stat(struct lruvec *lruvec, int file,
+				     int rotated, int nr_pages)
 {
 	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
 
-	reclaim_stat->recent_scanned[file]++;
+	reclaim_stat->recent_scanned[file] += nr_pages;
 	if (rotated)
-		reclaim_stat->recent_rotated[file]++;
+		reclaim_stat->recent_rotated[file] += nr_pages;
 }
 
 static void __activate_page(struct page *page, struct lruvec *lruvec,
@@ -288,7 +288,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 
 		__count_vm_events(PGACTIVATE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGACTIVATE, nr_pages);
-		update_page_reclaim_stat(lruvec, file, 1);
+		update_page_reclaim_stat(lruvec, file, 1, nr_pages);
 	}
 }
 
@@ -546,7 +546,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 		__count_vm_events(PGDEACTIVATE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_pages);
 	}
-	update_page_reclaim_stat(lruvec, file, 0);
+	update_page_reclaim_stat(lruvec, file, 0, nr_pages);
 }
 
 static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
@@ -564,7 +564,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec,
 
 		__count_vm_events(PGDEACTIVATE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_pages);
-		update_page_reclaim_stat(lruvec, file, 0);
+		update_page_reclaim_stat(lruvec, file, 0, nr_pages);
 	}
 }
 
@@ -590,7 +590,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 
 		__count_vm_events(PGLAZYFREE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGLAZYFREE, nr_pages);
-		update_page_reclaim_stat(lruvec, 1, 0);
+		update_page_reclaim_stat(lruvec, 1, 0, nr_pages);
	}
 }
 
@@ -899,8 +899,6 @@ EXPORT_SYMBOL(__pagevec_release);
 void lru_add_page_tail(struct page *page, struct page *page_tail,
 		       struct lruvec *lruvec, struct list_head *list)
 {
-	const int file = 0;
-
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 	VM_BUG_ON_PAGE(PageCompound(page_tail), page);
 	VM_BUG_ON_PAGE(PageLRU(page_tail), page);
@@ -926,9 +924,6 @@ void lru_add_page_tail(struct page *page, struct page *page_tail,
 		add_page_to_lru_list_tail(page_tail, lruvec,
 					  page_lru(page_tail));
 	}
-
-	if (!PageUnevictable(page))
-		update_page_reclaim_stat(lruvec, file, PageActive(page_tail));
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
@@ -973,7 +968,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 	if (page_evictable(page)) {
 		lru = page_lru(page);
 		update_page_reclaim_stat(lruvec, page_is_file_lru(page),
-					 PageActive(page));
+					 PageActive(page), nr_pages);
 		if (was_unevictable)
 			__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
 	} else {