From patchwork Fri Feb 21 04:30:52 2020
X-Patchwork-Submitter: Sultan Alsawaf
X-Patchwork-Id: 11395541
From: Sultan Alsawaf
Cc: Sultan Alsawaf, Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2] mm: Stop kswapd early when nothing's waiting for it to free pages
Date: Thu, 20 Feb 2020 20:30:52 -0800
Message-Id: <20200221043052.3305-1-sultan@kerneltoast.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200219182522.1960-1-sultan@kerneltoast.com>
References: <20200219182522.1960-1-sultan@kerneltoast.com>
MIME-Version: 1.0

From: Sultan Alsawaf

Keeping kswapd running when all the failed allocations that invoked it
are satisfied incurs a high overhead due to unnecessary page eviction
and writeback, as well as spurious VM pressure events to various
registered shrinkers. When kswapd no longer needs to work to make an
allocation succeed, stop it early to save resources.
Signed-off-by: Sultan Alsawaf
---
 include/linux/mmzone.h |  1 +
 mm/page_alloc.c        | 17 ++++++++++++++---
 mm/vmscan.c            |  3 ++-
 3 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 462f6873905a..23861cdaab7f 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -735,6 +735,7 @@ typedef struct pglist_data {
 	unsigned long node_spanned_pages; /* total size of physical page
					     range, including holes */
 	int node_id;
+	atomic_t kswapd_waiters;
 	wait_queue_head_t kswapd_wait;
 	wait_queue_head_t pfmemalloc_wait;
 	struct task_struct *kswapd;	/* Protected by
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3c4eb750a199..923b994c38c8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4401,6 +4401,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	int no_progress_loops;
 	unsigned int cpuset_mems_cookie;
 	int reserve_flags;
+	pg_data_t *pgdat = ac->preferred_zoneref->zone->zone_pgdat;
+	bool woke_kswapd = false;
 
 	/*
 	 * We also sanity check to catch abuse of atomic reserves being used by
@@ -4434,8 +4436,13 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (!ac->preferred_zoneref->zone)
 		goto nopage;
 
-	if (alloc_flags & ALLOC_KSWAPD)
+	if (alloc_flags & ALLOC_KSWAPD) {
+		if (!woke_kswapd) {
+			atomic_inc(&pgdat->kswapd_waiters);
+			woke_kswapd = true;
+		}
 		wake_all_kswapds(order, gfp_mask, ac);
+	}
 
 	/*
 	 * The adjusted alloc_flags might result in immediate success, so try
@@ -4640,9 +4647,12 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		goto retry;
 	}
 
 fail:
-	warn_alloc(gfp_mask, ac->nodemask,
-			"page allocation failure: order:%u", order);
 got_pg:
+	if (woke_kswapd)
+		atomic_dec(&pgdat->kswapd_waiters);
+	if (!page)
+		warn_alloc(gfp_mask, ac->nodemask,
+				"page allocation failure: order:%u", order);
 	return page;
 }
 
@@ -6711,6 +6721,7 @@ static void __meminit pgdat_init_internals(struct pglist_data *pgdat)
 	pgdat_page_ext_init(pgdat);
 	spin_lock_init(&pgdat->lru_lock);
 	lruvec_init(&pgdat->__lruvec);
+	pgdat->kswapd_waiters = (atomic_t)ATOMIC_INIT(0);
 }
 
 static void __meminit zone_init_internals(struct zone *zone, enum zone_type idx, int nid,
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c05eb9efec07..59d9f3dd14f6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3694,7 +3694,8 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
 		__fs_reclaim_release();
 		ret = try_to_freeze();
 		__fs_reclaim_acquire();
-		if (ret || kthread_should_stop())
+		if (ret || kthread_should_stop() ||
+		    !atomic_read(&pgdat->kswapd_waiters))
 			break;
 
 		/*