From patchwork Fri May 8 18:30:51 2020
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 11537379
From: Johannes Weiner
To: Andrew Morton
Cc: Alex Shi, Joonsoo Kim, Shakeel Butt, Hugh Dickins, Michal Hocko,
    "Kirill A. Shutemov", Roman Gushchin, linux-mm@kvack.org,
    cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 04/19] mm: memcontrol: move out cgroup swaprate throttling
Date: Fri, 8 May 2020 14:30:51 -0400
Message-Id: <20200508183105.225460-5-hannes@cmpxchg.org>
In-Reply-To: <20200508183105.225460-1-hannes@cmpxchg.org>
References: <20200508183105.225460-1-hannes@cmpxchg.org>

The cgroup swaprate throttling is about matching new anon allocations
to the rate of available IO when that is being throttled. It's the io
controller hooking into the VM, rather than a memory controller thing.
Rename mem_cgroup_throttle_swaprate() to cgroup_throttle_swaprate(),
and drop the @memcg argument, which is only used to check whether the
preceding page charge has succeeded and the fault is proceeding. We
could decouple the call from mem_cgroup_try_charge() here as well, but
that would cause unnecessary churn: the following patches convert all
callsites to a new charge API and we'll decouple as we go along.

Signed-off-by: Johannes Weiner
Reviewed-by: Alex Shi
Reviewed-by: Joonsoo Kim
Reviewed-by: Shakeel Butt
---
 include/linux/swap.h |  6 ++----
 mm/memcontrol.c      |  5 ++---
 mm/swapfile.c        | 14 +++++++-------
 3 files changed, 11 insertions(+), 14 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 873bf5206afb..b42fb47d8cbe 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -650,11 +650,9 @@ static inline int mem_cgroup_swappiness(struct mem_cgroup *mem)
 #endif

 #if defined(CONFIG_SWAP) && defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
-extern void mem_cgroup_throttle_swaprate(struct mem_cgroup *memcg, int node,
-					 gfp_t gfp_mask);
+extern void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask);
 #else
-static inline void mem_cgroup_throttle_swaprate(struct mem_cgroup *memcg,
-						int node, gfp_t gfp_mask)
+static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 {
 }
 #endif

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 13da46a5d8ae..8188d462d7ce 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6494,12 +6494,11 @@ int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
 int mem_cgroup_try_charge_delay(struct page *page, struct mm_struct *mm,
 				gfp_t gfp_mask, struct mem_cgroup **memcgp)
 {
-	struct mem_cgroup *memcg;
 	int ret;

 	ret = mem_cgroup_try_charge(page, mm, gfp_mask, memcgp);
-	memcg = *memcgp;
-	mem_cgroup_throttle_swaprate(memcg, page_to_nid(page), gfp_mask);
+	if (*memcgp)
+		cgroup_throttle_swaprate(page, gfp_mask);
 	return ret;
 }
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 15e5f8f290cc..ad42eac1822d 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3748,11 +3748,12 @@ static void free_swap_count_continuations(struct swap_info_struct *si)
 }

 #if defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
-void mem_cgroup_throttle_swaprate(struct mem_cgroup *memcg, int node,
-				  gfp_t gfp_mask)
+void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
 {
 	struct swap_info_struct *si, *next;
-	if (!(gfp_mask & __GFP_IO) || !memcg)
+	int nid = page_to_nid(page);
+
+	if (!(gfp_mask & __GFP_IO))
 		return;

 	if (!blk_cgroup_congested())
@@ -3766,11 +3767,10 @@ void mem_cgroup_throttle_swaprate(struct mem_cgroup *memcg, int node,
 		return;

 	spin_lock(&swap_avail_lock);
-	plist_for_each_entry_safe(si, next, &swap_avail_heads[node],
-				  avail_lists[node]) {
+	plist_for_each_entry_safe(si, next, &swap_avail_heads[nid],
+				  avail_lists[nid]) {
 		if (si->bdev) {
-			blkcg_schedule_throttle(bdev_get_queue(si->bdev),
-						true);
+			blkcg_schedule_throttle(bdev_get_queue(si->bdev), true);
 			break;
 		}
 	}
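Aside for reviewers: stripped of the locking and the plist walk, the gatekeeping at the top of cgroup_throttle_swaprate() reduces to two early exits: bail out unless the allocation is allowed to do IO, and unless the io controller is actually congested. A minimal standalone sketch of that decision (the flag value, the blkcg_congested variable, and should_throttle_swaprate() are illustrative stand-ins, not kernel definitions):

```c
#include <stdbool.h>

/* Illustrative stand-in for the kernel's __GFP_IO bit. */
#define __GFP_IO 0x40u

/* Stand-in for blk_cgroup_congested(): true when the io controller
 * is throttling the current cgroup. */
static bool blkcg_congested;

/* Model of the early-exit logic: only consider throttling when the
 * allocation may block on IO and the blk-cgroup is congested;
 * otherwise return immediately, as the real function does. */
static bool should_throttle_swaprate(unsigned int gfp_mask)
{
	if (!(gfp_mask & __GFP_IO))
		return false;	/* caller cannot block on IO */
	if (!blkcg_congested)
		return false;	/* io controller is not throttling */
	return true;		/* proceed to scan the node's swap devices */
}
```

The real function then walks swap_avail_heads[nid] for the page's node and schedules throttling against the first swap device's request queue, which is why the @memcg argument could be dropped: the decision depends only on the gfp mask, the blkcg state, and the page's node.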