From patchwork Fri Mar 20 02:51:45 2020
X-Patchwork-Submitter: chenqiwu
X-Patchwork-Id: 11448297
From: qiwuchen55@gmail.com
To: chris@chrisdown.name, akpm@linux-foundation.org, mhocko@kernel.org,
	willy@infradead.org
Cc: linux-mm@kvack.org, chenqiwu
Subject: [PATCH v2] mm/vmscan: return target_mem_cgroup from cgroup_reclaim()
Date: Fri, 20 Mar 2020 10:51:45 +0800
Message-Id: <1584672705-1344-1-git-send-email-qiwuchen55@gmail.com>

From: chenqiwu

Previously, the code split apart checking whether we are in cgroup
reclaim from getting the target memory cgroup for that reclaim. This
split is confusing and unnecessary: cgroup_reclaim() itself only checks
whether sc->target_mem_cgroup is NULL. Merge the two use cases into one
by returning target_mem_cgroup itself from cgroup_reclaim().

As a result, sc->target_mem_cgroup is only used when CONFIG_MEMCG is
set, so wrapping it in #ifdef CONFIG_MEMCG in struct scan_control saves
some space for those who build their kernels with memory cgroups
disabled.

Signed-off-by: chenqiwu
---
 mm/vmscan.c | 41 +++++++++++++++++++++++------------------
 1 file changed, 23 insertions(+), 18 deletions(-)
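The whole change hinges on one C idiom: a non-NULL pointer is true, so
a getter that returns the pointer can also stand in for the old bool
predicate. Below is a minimal user-space sketch of that idiom; the
types and the "order" member are stand-ins for illustration, not the
kernel definitions. It builds with or without -DCONFIG_MEMCG, which is
what lets struct scan_control drop the field entirely in !CONFIG_MEMCG
builds:

/*
 * Minimal user-space sketch, not kernel code: the names mirror
 * mm/vmscan.c, but the types are illustrative stand-ins.
 */
#include <stdio.h>

struct mem_cgroup { int id; };	/* stand-in for the kernel type */

struct scan_control {
	int order;		/* unrelated field, keeps struct non-empty */
#ifdef CONFIG_MEMCG
	/* Present only when memory cgroups are compiled in. */
	struct mem_cgroup *target_mem_cgroup;
#endif
};

/*
 * Returns the reclaim target, or NULL for global reclaim.  Since a
 * non-NULL pointer is true in C, one function serves both the old
 * bool predicate and direct reads of sc->target_mem_cgroup.
 */
static struct mem_cgroup *cgroup_reclaim(struct scan_control *sc)
{
#ifdef CONFIG_MEMCG
	return sc->target_mem_cgroup;
#else
	return NULL;
#endif
}

int main(void)
{
	struct scan_control sc = { .order = 0 };
	struct mem_cgroup *target_memcg = cgroup_reclaim(&sc);

	if (!target_memcg)	/* predicate use */
		printf("global reclaim\n");
	else			/* getter use */
		printf("cgroup reclaim of memcg %d\n", target_memcg->id);
	return 0;
}

Callers compiled either way keep the identical "if (!target_memcg)"
shape seen throughout the diff below.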
diff --git a/mm/vmscan.c b/mm/vmscan.c
index dca623d..eb9155e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -73,11 +73,13 @@ struct scan_control {
 	 */
 	nodemask_t	*nodemask;
 
+#ifdef CONFIG_MEMCG
 	/*
 	 * The memory cgroup that hit its limit and as a result is the
 	 * primary target of this reclaim invocation.
 	 */
 	struct mem_cgroup *target_mem_cgroup;
+#endif
 
 	/* Can active pages be deactivated as part of reclaim? */
 #define DEACTIVATE_ANON 1
@@ -238,7 +240,7 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker)
 	up_write(&shrinker_rwsem);
 }
 
-static bool cgroup_reclaim(struct scan_control *sc)
+static struct mem_cgroup *cgroup_reclaim(struct scan_control *sc)
 {
 	return sc->target_mem_cgroup;
 }
@@ -276,9 +278,9 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker)
 {
 }
 
-static bool cgroup_reclaim(struct scan_control *sc)
+static struct mem_cgroup *cgroup_reclaim(struct scan_control *sc)
 {
-	return false;
+	return NULL;
 }
 
 static bool writeback_throttling_sane(struct scan_control *sc)
@@ -984,7 +986,7 @@ static enum page_references page_check_references(struct page *page,
 	int referenced_ptes, referenced_page;
 	unsigned long vm_flags;
 
-	referenced_ptes = page_referenced(page, 1, sc->target_mem_cgroup,
+	referenced_ptes = page_referenced(page, 1, cgroup_reclaim(sc),
 					  &vm_flags);
 	referenced_page = TestClearPageReferenced(page);
 
@@ -1422,7 +1424,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 			count_vm_event(PGLAZYFREED);
 			count_memcg_page_event(page, PGLAZYFREED);
 		} else if (!mapping || !__remove_mapping(mapping, page, true,
-							 sc->target_mem_cgroup))
+							 cgroup_reclaim(sc)))
 			goto keep_locked;
 
 		unlock_page(page);
@@ -1907,6 +1909,7 @@ static int current_may_throttle(void)
 	enum vm_event_item item;
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
+	struct mem_cgroup *target_memcg = cgroup_reclaim(sc);
 	bool stalled = false;
 
 	while (unlikely(too_many_isolated(pgdat, file, sc))) {
@@ -1933,7 +1936,7 @@ static int current_may_throttle(void)
 	reclaim_stat->recent_scanned[file] += nr_taken;
 
 	item = current_is_kswapd() ? PGSCAN_KSWAPD : PGSCAN_DIRECT;
-	if (!cgroup_reclaim(sc))
+	if (!target_memcg)
 		__count_vm_events(item, nr_scanned);
 	__count_memcg_events(lruvec_memcg(lruvec), item, nr_scanned);
 	spin_unlock_irq(&pgdat->lru_lock);
@@ -1947,7 +1950,7 @@ static int current_may_throttle(void)
 	spin_lock_irq(&pgdat->lru_lock);
 
 	item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
-	if (!cgroup_reclaim(sc))
+	if (!target_memcg)
 		__count_vm_events(item, nr_reclaimed);
 	__count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
 	reclaim_stat->recent_rotated[0] += stat.nr_activate[0];
@@ -2041,7 +2044,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 			}
 		}
 
-		if (page_referenced(page, 0, sc->target_mem_cgroup,
+		if (page_referenced(page, 0, cgroup_reclaim(sc),
 				    &vm_flags)) {
 			nr_rotated += hpage_nr_pages(page);
 			/*
@@ -2625,7 +2628,7 @@ static inline bool should_continue_reclaim(struct pglist_data *pgdat,
 
 static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 {
-	struct mem_cgroup *target_memcg = sc->target_mem_cgroup;
+	struct mem_cgroup *target_memcg = cgroup_reclaim(sc);
 	struct mem_cgroup *memcg;
 
 	memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
@@ -2686,10 +2689,11 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	struct reclaim_state *reclaim_state = current->reclaim_state;
 	unsigned long nr_reclaimed, nr_scanned;
 	struct lruvec *target_lruvec;
+	struct mem_cgroup *target_memcg = cgroup_reclaim(sc);
 	bool reclaimable = false;
 	unsigned long file;
 
-	target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
+	target_lruvec = mem_cgroup_lruvec(target_memcg, pgdat);
 
 again:
 	memset(&sc->nr, 0, sizeof(sc->nr));
@@ -2744,7 +2748,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	 * thrashing file LRU becomes infinitely more attractive than
 	 * anon pages.  Try to detect this based on file LRU size.
 	 */
-	if (!cgroup_reclaim(sc)) {
+	if (!target_memcg) {
 		unsigned long total_high_wmark = 0;
 		unsigned long free, anon;
 		int z;
@@ -2782,7 +2786,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	}
 
 	/* Record the subtree's reclaim efficiency */
-	vmpressure(sc->gfp_mask, sc->target_mem_cgroup, true,
+	vmpressure(sc->gfp_mask, target_memcg, true,
 		   sc->nr_scanned - nr_scanned,
 		   sc->nr_reclaimed - nr_reclaimed);
 
@@ -2833,7 +2837,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	 * stalling in wait_iff_congested().
	 */
 	if ((current_is_kswapd() ||
-	     (cgroup_reclaim(sc) && writeback_throttling_sane(sc))) &&
+	     (target_memcg && writeback_throttling_sane(sc))) &&
 	    sc->nr.dirty && sc->nr.dirty == sc->nr.congested)
 		set_bit(LRUVEC_CONGESTED, &target_lruvec->flags);
 
@@ -3020,14 +3024,15 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 	pg_data_t *last_pgdat;
 	struct zoneref *z;
 	struct zone *zone;
+	struct mem_cgroup *target_memcg = cgroup_reclaim(sc);
 retry:
 	delayacct_freepages_start();
 
-	if (!cgroup_reclaim(sc))
+	if (!target_memcg)
 		__count_zid_vm_events(ALLOCSTALL, sc->reclaim_idx, 1);
 
 	do {
-		vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
+		vmpressure_prio(sc->gfp_mask, target_memcg,
 				sc->priority);
 		sc->nr_scanned = 0;
 		shrink_zones(zonelist, sc);
@@ -3053,12 +3058,12 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 			continue;
 		last_pgdat = zone->zone_pgdat;
 
-		snapshot_refaults(sc->target_mem_cgroup, zone->zone_pgdat);
+		snapshot_refaults(target_memcg, zone->zone_pgdat);
 
-		if (cgroup_reclaim(sc)) {
+		if (target_memcg) {
 			struct lruvec *lruvec;
 
-			lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup,
+			lruvec = mem_cgroup_lruvec(target_memcg,
 						   zone->zone_pgdat);
 			clear_bit(LRUVEC_CONGESTED, &lruvec->flags);
 		}