From patchwork Tue Jul 31 11:09:28 2018
X-Patchwork-Submitter: Zhaoyang Huang
X-Patchwork-Id: 10550651
From: Zhaoyang Huang <zhaoyang.huang@spreadtrum.com>
To: Steven Rostedt, Ingo Molnar, Johannes Weiner, Michal Hocko, Vladimir Davydov, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-patch-test@lists.linaro.org
Subject: [PATCH v2] mm: terminate the reclaim early when direct reclaiming
Date: Tue, 31 Jul 2018 19:09:28 +0800
Message-Id: <1533035368-30911-1-git-send-email-zhaoyang.huang@spreadtrum.com>

This patch lets direct reclaim finish earlier than it currently does. We observed that direct reclaim can take a long time to complete when memcg is enabled.
By debugging, we found that the cause is a soft limit that is too low for the loop-exit criteria to be met. So we add two checks that judge whether enough memory has already been reclaimed, using the same criteria as shrink_lruvec:

1. after each memcg soft-limit reclaim pass.
2. before starting the global reclaim in shrink_zone.

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@spreadtrum.com>
---
 include/linux/memcontrol.h |  3 ++-
 mm/memcontrol.c            |  3 +++
 mm/vmscan.c                | 38 +++++++++++++++++++++++++++++++++++++-
 3 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 6c6fb11..a7e82c7 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -325,7 +325,8 @@ void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg,
 void mem_cgroup_uncharge_list(struct list_head *page_list);
 
 void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
-
+bool direct_reclaim_reach_watermark(pg_data_t *pgdat, unsigned long nr_reclaimed,
+			unsigned long nr_scanned, gfp_t gfp_mask, int order);
 static struct mem_cgroup_per_node *
 mem_cgroup_nodeinfo(struct mem_cgroup *memcg, int nid)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8c0280b..e4efd46 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2577,6 +2577,9 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 		    (next_mz == NULL ||
 		    loop > MEM_CGROUP_MAX_SOFT_LIMIT_RECLAIM_LOOPS))
 			break;
+		if (direct_reclaim_reach_watermark(pgdat, nr_reclaimed,
+			*total_scanned, gfp_mask, order))
+			break;
 	} while (!nr_reclaimed);
 	if (next_mz)
 		css_put(&next_mz->memcg->css);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 03822f8..19503f3 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2518,6 +2518,34 @@ static bool pgdat_memcg_congested(pg_data_t *pgdat, struct mem_cgroup *memcg)
 		(memcg && memcg_congested(pgdat, memcg));
 }
 
+bool direct_reclaim_reach_watermark(pg_data_t *pgdat, unsigned long nr_reclaimed,
+				unsigned long nr_scanned, gfp_t gfp_mask,
+				int order)
+{
+	struct scan_control sc = {
+		.gfp_mask = gfp_mask,
+		.order = order,
+		.priority = DEF_PRIORITY,
+		.nr_reclaimed = nr_reclaimed,
+		.nr_scanned = nr_scanned,
+	};
+	if (!current_is_kswapd())
+		return false;
+	if (!IS_ENABLED(CONFIG_COMPACTION))
+		return false;
+	/*
+	 * In fact, we add 1 to nr_reclaimed and nr_scanned to let should_continue_reclaim
+	 * NOT return by finding they are zero, which means compaction_suitable()
+	 * takes effect here to judge if we have reclaimed enough pages for passing
+	 * the watermark and no necessary to check other memcg anymore.
+	 */
+	if (!should_continue_reclaim(pgdat,
+				sc.nr_reclaimed + 1, sc.nr_scanned + 1, &sc))
+		return true;
+	return false;
+}
+EXPORT_SYMBOL(direct_reclaim_reach_watermark);
+
 static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 {
 	struct reclaim_state *reclaim_state = current->reclaim_state;
@@ -2802,7 +2830,15 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 			sc->nr_scanned += nr_soft_scanned;
 			/* need some check for avoid more shrink_zone() */
 		}
-
+		/*
+		 * we maybe have stolen enough pages from soft limit reclaim, so we return
+		 * back if we are direct reclaim
+		 */
+		if (direct_reclaim_reach_watermark(zone->zone_pgdat, sc->nr_reclaimed,
+			sc->nr_scanned, sc->gfp_mask, sc->order)) {
+			sc->gfp_mask = orig_mask;
+			return;
+		}
 		/* See comment about same check for global reclaim above */
 		if (zone->zone_pgdat == last_pgdat)
 			continue;
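
For readers who want the shape of the change without the kernel context, below is a minimal userspace sketch of the intended control flow: walk the memcgs in soft-limit order, reclaim from each, and bail out as soon as a "reclaimed enough" predicate says the caller is satisfied. This is an illustration, not kernel code; struct memcg_model, reached_watermark() and the numbers are made-up stand-ins for the real memcg iteration and for the should_continue_reclaim()/compaction_suitable() check used by the patch.

#include <stdbool.h>
#include <stdio.h>

/* made-up stand-in for a memcg with some amount of reclaimable memory */
struct memcg_model {
	const char *name;
	unsigned long reclaimable;
};

/*
 * Stand-in for direct_reclaim_reach_watermark(): the real patch asks
 * should_continue_reclaim()/compaction_suitable(); here we simply check
 * whether the caller's demand has been covered.
 */
static bool reached_watermark(unsigned long nr_reclaimed, unsigned long target)
{
	return nr_reclaimed >= target;
}

int main(void)
{
	struct memcg_model memcgs[] = {
		{ "memcg-A", 40 }, { "memcg-B", 80 }, { "memcg-C", 500 },
	};
	unsigned long nr_reclaimed = 0, target = 100;
	size_t i;

	for (i = 0; i < sizeof(memcgs) / sizeof(memcgs[0]); i++) {
		/* "reclaim" from this memcg */
		nr_reclaimed += memcgs[i].reclaimable;
		printf("reclaimed from %s, total %lu\n",
		       memcgs[i].name, nr_reclaimed);
		/* the early termination this patch adds */
		if (reached_watermark(nr_reclaimed, target)) {
			printf("enough reclaimed, stop walking memcgs\n");
			break;
		}
	}
	return 0;
}

Without the early check, the loop keeps visiting memcg-C even though the first two memcgs already satisfied the request; that extra walking is the direct-reclaim latency the patch is trying to cut.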