From patchwork Mon Jul 13 11:42:35 2020
X-Patchwork-Submitter: Chris Down
X-Patchwork-Id: 11659539
Date: Mon, 13 Jul 2020 12:42:35 +0100
From: Chris Down
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, linux-mm@kvack.org,
 cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v2 1/2] mm, memcg: reclaim more aggressively before high
 allocator throttling

In Facebook production, we've seen cases where cgroups have been put
into allocator throttling even when they appear to have a lot of slack
file caches which should be trivially reclaimable.
Looking more closely, the problem is that we only try a single cgroup
reclaim walk for each return to usermode before calculating whether or
not we should throttle. This single attempt doesn't produce enough
pressure to shrink for cgroups with a rapidly growing amount of file
caches prior to entering allocator throttling.

As an example, we see that threads in an affected cgroup are stuck in
allocator throttling:

    # for i in $(cat cgroup.threads); do
    >     grep over_high "/proc/$i/stack"
    > done
    [<0>] mem_cgroup_handle_over_high+0x10b/0x150
    [<0>] mem_cgroup_handle_over_high+0x10b/0x150
    [<0>] mem_cgroup_handle_over_high+0x10b/0x150

...however, there is no I/O pressure reported by PSI, despite a lot of
slack file pages:

    # cat memory.pressure
    some avg10=78.50 avg60=84.99 avg300=84.53 total=5702440903
    full avg10=78.50 avg60=84.99 avg300=84.53 total=5702116959
    # cat io.pressure
    some avg10=0.00 avg60=0.00 avg300=0.00 total=78051391
    full avg10=0.00 avg60=0.00 avg300=0.00 total=78049640
    # grep _file memory.stat
    inactive_file 1370939392
    active_file 661635072

This patch changes the behaviour to retry reclaim either until the
current task goes below the 10ms grace period, or we are making no
reclaim progress at all. In the latter case, we enter reclaim throttling
as before.

To a user, there's no intuitive reason for the reclaim behaviour to
differ from hitting memory.high as part of a new allocation, as opposed
to hitting memory.high because someone lowered its value. As such this
also brings an added benefit: it unifies the reclaim behaviour between
the two.

There's precedent for this behaviour: we already do reclaim retries when
writing to memory.{high,max}, in max reclaim, and in the page allocator
itself.
Signed-off-by: Chris Down
Cc: Andrew Morton
Cc: Johannes Weiner
Cc: Tejun Heo
Cc: Michal Hocko
Reviewed-by: Shakeel Butt
Acked-by: Johannes Weiner
---
 mm/memcontrol.c | 42 +++++++++++++++++++++++++++++++++++++-----
 1 file changed, 37 insertions(+), 5 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 0145a77aa074..d4b0d8af3747 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -73,6 +73,7 @@ EXPORT_SYMBOL(memory_cgrp_subsys);
 
 struct mem_cgroup *root_mem_cgroup __read_mostly;
 
+/* The number of times we should retry reclaim failures before giving up. */
 #define MEM_CGROUP_RECLAIM_RETRIES 5
 
 /* Socket memory accounting disabled? */
@@ -2365,18 +2366,23 @@ static int memcg_hotplug_cpu_dead(unsigned int cpu)
 	return 0;
 }
 
-static void reclaim_high(struct mem_cgroup *memcg,
-			 unsigned int nr_pages,
-			 gfp_t gfp_mask)
+static unsigned long reclaim_high(struct mem_cgroup *memcg,
+				  unsigned int nr_pages,
+				  gfp_t gfp_mask)
 {
+	unsigned long nr_reclaimed = 0;
+
 	do {
 		if (page_counter_read(&memcg->memory) <=
 		    READ_ONCE(memcg->memory.high))
 			continue;
 		memcg_memory_event(memcg, MEMCG_HIGH);
-		try_to_free_mem_cgroup_pages(memcg, nr_pages, gfp_mask, true);
+		nr_reclaimed += try_to_free_mem_cgroup_pages(memcg, nr_pages,
+							     gfp_mask, true);
 	} while ((memcg = parent_mem_cgroup(memcg)) &&
 		 !mem_cgroup_is_root(memcg));
+
+	return nr_reclaimed;
 }
 
 static void high_work_func(struct work_struct *work)
@@ -2532,16 +2538,32 @@ void mem_cgroup_handle_over_high(void)
 {
 	unsigned long penalty_jiffies;
 	unsigned long pflags;
+	unsigned long nr_reclaimed;
 	unsigned int nr_pages = current->memcg_nr_pages_over_high;
+	int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
 	struct mem_cgroup *memcg;
+	bool in_retry = false;
 
 	if (likely(!nr_pages))
 		return;
 
 	memcg = get_mem_cgroup_from_mm(current->mm);
-	reclaim_high(memcg, nr_pages, GFP_KERNEL);
 	current->memcg_nr_pages_over_high = 0;
 
+retry_reclaim:
+	/*
+	 * The allocating task should reclaim at least the batch size, but for
+	 * subsequent retries we only want to do what's necessary to prevent oom
+	 * or breaching resource isolation.
+	 *
+	 * This is distinct from memory.max or page allocator behaviour because
+	 * memory.high is currently batched, whereas memory.max and the page
+	 * allocator run every time an allocation is made.
+	 */
+	nr_reclaimed = reclaim_high(memcg,
+				    in_retry ? SWAP_CLUSTER_MAX : nr_pages,
+				    GFP_KERNEL);
+
 	/*
 	 * memory.high is breached and reclaim is unable to keep up. Throttle
 	 * allocators proactively to slow down excessive growth.
@@ -2568,6 +2590,16 @@ void mem_cgroup_handle_over_high(void)
 	if (penalty_jiffies <= HZ / 100)
 		goto out;
 
+	/*
+	 * If reclaim is making forward progress but we're still over
+	 * memory.high, we want to encourage that rather than doing allocator
+	 * throttling.
+	 */
+	if (nr_reclaimed || nr_retries--) {
+		in_retry = true;
+		goto retry_reclaim;
+	}
+
 	/*
 	 * If we exit early, we're guaranteed to die (since
 	 * schedule_timeout_killable sets TASK_KILLABLE). This means we don't

From patchwork Mon Jul 13 11:42:48 2020
X-Patchwork-Submitter: Chris Down
X-Patchwork-Id: 11659541
Date: Mon, 13 Jul 2020 12:42:48 +0100
From: Chris Down
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, linux-mm@kvack.org,
 cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v2 2/2] mm, memcg: unify reclaim retry limits with page
 allocator

Reclaim retries have been set to 5 since the beginning of time in
commit 66e1707bc346 ("Memory controller: add per cgroup LRU and
reclaim"). However, we now have a generally agreed-upon standard for
page reclaim: MAX_RECLAIM_RETRIES (currently 16), added many years
later in commit 0a0337e0d1d1 ("mm, oom: rework oom detection").

In the absence of a compelling reason to declare an OOM earlier in
memcg context than page allocator context, it seems reasonable to
supplant MEM_CGROUP_RECLAIM_RETRIES with MAX_RECLAIM_RETRIES, making
the page allocator and memcg internals more similar in semantics when
reclaim fails to produce results, avoiding premature OOMs or
throttling.

Signed-off-by: Chris Down
Cc: Andrew Morton
Cc: Johannes Weiner
Cc: Michal Hocko
Acked-by: Michal Hocko
Reviewed-by: Shakeel Butt
---
 mm/memcontrol.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d4b0d8af3747..672123875494 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -73,9 +73,6 @@ EXPORT_SYMBOL(memory_cgrp_subsys);
 
 struct mem_cgroup *root_mem_cgroup __read_mostly;
 
-/* The number of times we should retry reclaim failures before giving up. */
-#define MEM_CGROUP_RECLAIM_RETRIES 5
-
 /* Socket memory accounting disabled? */
 static bool cgroup_memory_nosocket;
 
@@ -2540,7 +2537,7 @@ void mem_cgroup_handle_over_high(void)
 	unsigned long pflags;
 	unsigned long nr_reclaimed;
 	unsigned int nr_pages = current->memcg_nr_pages_over_high;
-	int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
+	int nr_retries = MAX_RECLAIM_RETRIES;
 	struct mem_cgroup *memcg;
 	bool in_retry = false;
 
@@ -2617,7 +2614,7 @@ static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
 		      unsigned int nr_pages)
 {
 	unsigned int batch = max(MEMCG_CHARGE_BATCH, nr_pages);
-	int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
+	int nr_retries = MAX_RECLAIM_RETRIES;
 	struct mem_cgroup *mem_over_limit;
 	struct page_counter *counter;
 	unsigned long nr_reclaimed;
@@ -2736,7 +2733,7 @@ static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
 					 get_order(nr_pages * PAGE_SIZE));
 	switch (oom_status) {
 	case OOM_SUCCESS:
-		nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
+		nr_retries = MAX_RECLAIM_RETRIES;
 		goto retry;
 	case OOM_FAILED:
 		goto force;
@@ -3396,7 +3393,7 @@ static inline bool memcg_has_children(struct mem_cgroup *memcg)
  */
 static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
 {
-	int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
+	int nr_retries = MAX_RECLAIM_RETRIES;
 
 	/* we call try-to-free pages for make this cgroup empty */
 	lru_add_drain_all();
@@ -6225,7 +6222,7 @@ static ssize_t memory_high_write(struct kernfs_open_file *of,
 				 char *buf, size_t nbytes, loff_t off)
 {
 	struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
-	unsigned int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
+	unsigned int nr_retries = MAX_RECLAIM_RETRIES;
 	bool drained = false;
 	unsigned long high;
 	int err;
@@ -6273,7 +6270,7 @@ static ssize_t memory_max_write(struct kernfs_open_file *of,
 				char *buf, size_t nbytes, loff_t off)
 {
 	struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
-	unsigned int nr_reclaims = MEM_CGROUP_RECLAIM_RETRIES;
+	unsigned int nr_reclaims = MAX_RECLAIM_RETRIES;
 	bool drained = false;
 	unsigned long max;
 	int err;