From patchwork Mon Apr 20 22:11:25 2020
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 11499967
From: Johannes Weiner
To: Joonsoo Kim, Alex Shi
Cc: Shakeel Butt, Hugh Dickins, Michal Hocko, "Kirill A. Shutemov",
    Roman Gushchin, linux-mm@kvack.org, cgroups@vger.kernel.org,
    linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 17/18] mm: memcontrol: delete unused lrucare handling
Date: Mon, 20 Apr 2020 18:11:25 -0400
Message-Id: <20200420221126.341272-18-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.26.0
In-Reply-To: <20200420221126.341272-1-hannes@cmpxchg.org>
References: <20200420221126.341272-1-hannes@cmpxchg.org>
MIME-Version: 1.0

Signed-off-by: Johannes Weiner
Reviewed-by: Joonsoo Kim
---
 include/linux/memcontrol.h |  5 ++--
 kernel/events/uprobes.c    |  3 +-
 mm/filemap.c               |  2 +-
 mm/huge_memory.c           |  7 ++---
 mm/khugepaged.c            |  4 +--
 mm/memcontrol.c            | 57 +++-----------------------------------
 mm/memory.c                |  8 +++---
 mm/migrate.c               |  2 +-
 mm/shmem.c                 |  2 +-
 mm/swap_state.c            |  2 +-
 mm/userfaultfd.c           |  2 +-
 11 files changed, 21 insertions(+), 73 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index d458f1d90aa4..4b868e5a687f 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -357,8 +357,7 @@ static inline unsigned long mem_cgroup_protection(struct mem_cgroup *memcg,
 enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
 						struct mem_cgroup *memcg);
 
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
-		      bool lrucare);
+int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
 
 void mem_cgroup_uncharge(struct page *page);
 void mem_cgroup_uncharge_list(struct list_head *page_list);
@@ -839,7 +838,7 @@ static inline enum mem_cgroup_protection mem_cgroup_protected(
 }
 
 static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
-				    gfp_t gfp_mask, bool lrucare)
+				    gfp_t gfp_mask)
 {
 	return 0;
 }
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 4253c153e985..eddc8db96027 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -167,8 +167,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 				addr + PAGE_SIZE);
 
 	if (new_page) {
-		err = mem_cgroup_charge(new_page, vma->vm_mm, GFP_KERNEL,
-					false);
+		err = mem_cgroup_charge(new_page, vma->vm_mm, GFP_KERNEL);
 		if (err)
 			return err;
 	}
diff --git a/mm/filemap.c b/mm/filemap.c
index a10bd6696049..f73b221314df 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -845,7 +845,7 @@ static int __add_to_page_cache_locked(struct page *page,
 	page->index = offset;
 
 	if (!huge) {
-		error = mem_cgroup_charge(page, current->mm, gfp_mask, false);
+		error = mem_cgroup_charge(page, current->mm, gfp_mask);
 		if (error)
 			goto error;
 	}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0b33eaf0740a..35a716720e26 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -593,7 +593,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 
 	VM_BUG_ON_PAGE(!PageCompound(page), page);
 
-	if (mem_cgroup_charge(page, vma->vm_mm, gfp, false)) {
+	if (mem_cgroup_charge(page, vma->vm_mm, gfp)) {
 		put_page(page);
 		count_vm_event(THP_FAULT_FALLBACK);
 		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
@@ -1276,7 +1276,7 @@ static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf,
 					       vmf->address, page_to_nid(page));
 		if (unlikely(!pages[i] ||
 			     mem_cgroup_charge(pages[i], vma->vm_mm,
-					       GFP_KERNEL, false))) {
+					       GFP_KERNEL))) {
 			if (pages[i])
 				put_page(pages[i]);
 			while (--i >= 0)
@@ -1430,8 +1430,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 		goto out;
 	}
 
-	if (unlikely(mem_cgroup_charge(new_page, vma->vm_mm, huge_gfp,
-				       false))) {
+	if (unlikely(mem_cgroup_charge(new_page, vma->vm_mm, huge_gfp))) {
 		put_page(new_page);
 		split_huge_pmd(vma, vmf->pmd, vmf->address);
 		if (page)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 5cf8082fb038..28c6d84db4ee 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -973,7 +973,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 		goto out_nolock;
 	}
 
-	if (unlikely(mem_cgroup_charge(new_page, mm, gfp, false))) {
+	if (unlikely(mem_cgroup_charge(new_page, mm, gfp))) {
 		result = SCAN_CGROUP_CHARGE_FAIL;
 		goto out_nolock;
 	}
@@ -1527,7 +1527,7 @@ static void collapse_file(struct mm_struct *mm,
 		goto out;
 	}
 
-	if (unlikely(mem_cgroup_charge(new_page, mm, gfp, false))) {
+	if (unlikely(mem_cgroup_charge(new_page, mm, gfp))) {
 		result = SCAN_CGROUP_CHARGE_FAIL;
 		goto out;
 	}
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1d7408a8744a..a8cce52b6b4d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2601,51 +2601,9 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
 	css_put_many(&memcg->css, nr_pages);
 }
 
-static void lock_page_lru(struct page *page, int *isolated)
+static void commit_charge(struct page *page, struct mem_cgroup *memcg)
 {
-	pg_data_t *pgdat = page_pgdat(page);
-
-	spin_lock_irq(&pgdat->lru_lock);
-	if (PageLRU(page)) {
-		struct lruvec *lruvec;
-
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-		ClearPageLRU(page);
-		del_page_from_lru_list(page, lruvec, page_lru(page));
-		*isolated = 1;
-	} else
-		*isolated = 0;
-}
-
-static void unlock_page_lru(struct page *page, int isolated)
-{
-	pg_data_t *pgdat = page_pgdat(page);
-
-	if (isolated) {
-		struct lruvec *lruvec;
-
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-		VM_BUG_ON_PAGE(PageLRU(page), page);
-		SetPageLRU(page);
-		add_page_to_lru_list(page, lruvec, page_lru(page));
-	}
-	spin_unlock_irq(&pgdat->lru_lock);
-}
-
-static void commit_charge(struct page *page, struct mem_cgroup *memcg,
-			  bool lrucare)
-{
-	int isolated;
-
 	VM_BUG_ON_PAGE(page->mem_cgroup, page);
-
-	/*
-	 * In some cases, SwapCache and FUSE(splice_buf->radixtree), the page
-	 * may already be on some other mem_cgroup's LRU.  Take care of it.
-	 */
-	if (lrucare)
-		lock_page_lru(page, &isolated);
-
 	/*
 	 * Nobody should be changing or seriously looking at
 	 * page->mem_cgroup at this point:
@@ -2661,9 +2619,6 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
 	 * have the page locked
 	 */
 	page->mem_cgroup = memcg;
-
-	if (lrucare)
-		unlock_page_lru(page, isolated);
 }
 
 #ifdef CONFIG_MEMCG_KMEM
@@ -6433,22 +6388,18 @@ enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
  * @page: page to charge
  * @mm: mm context of the victim
  * @gfp_mask: reclaim mode
- * @lrucare: page might be on the LRU already
  *
  * Try to charge @page to the memcg that @mm belongs to, reclaiming
  * pages according to @gfp_mask if necessary.
  *
  * Returns 0 on success. Otherwise, an error code is returned.
  */
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
-		      bool lrucare)
+int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
 {
 	unsigned int nr_pages = hpage_nr_pages(page);
 	struct mem_cgroup *memcg = NULL;
 	int ret = 0;
 
-	VM_BUG_ON_PAGE(PageLRU(page) && !lrucare, page);
-
 	if (mem_cgroup_disabled())
 		goto out;
 
@@ -6482,7 +6433,7 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
 	if (ret)
 		goto out_put;
 
-	commit_charge(page, memcg, lrucare);
+	commit_charge(page, memcg);
 
 	local_irq_disable();
 	mem_cgroup_charge_statistics(memcg, page, nr_pages);
@@ -6685,7 +6636,7 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
 		page_counter_charge(&memcg->memsw, nr_pages);
 	css_get_many(&memcg->css, nr_pages);
 
-	commit_charge(newpage, memcg, false);
+	commit_charge(newpage, memcg);
 
 	local_irq_save(flags);
 	mem_cgroup_charge_statistics(memcg, newpage, nr_pages);
diff --git a/mm/memory.c b/mm/memory.c
index 5d266532fc40..0ad4db56bea2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2677,7 +2677,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 		}
 	}
 
-	if (mem_cgroup_charge(new_page, mm, GFP_KERNEL, false))
+	if (mem_cgroup_charge(new_page, mm, GFP_KERNEL))
 		goto oom_free_new;
 	cgroup_throttle_swaprate(new_page, GFP_KERNEL);
 
@@ -3136,7 +3136,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			/* Tell memcg to use swap ownership records */
 			SetPageSwapCache(page);
 			err = mem_cgroup_charge(page, vma->vm_mm,
-						GFP_KERNEL, false);
+						GFP_KERNEL);
 			ClearPageSwapCache(page);
 			if (err)
 				goto out_page;
@@ -3360,7 +3360,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	if (!page)
 		goto oom;
 
-	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL, false))
+	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL))
 		goto oom_free_page;
 	cgroup_throttle_swaprate(page, GFP_KERNEL);
 
@@ -3856,7 +3856,7 @@ static vm_fault_t do_cow_fault(struct vm_fault *vmf)
 	if (!vmf->cow_page)
 		return VM_FAULT_OOM;
 
-	if (mem_cgroup_charge(vmf->cow_page, vma->vm_mm, GFP_KERNEL, false)) {
+	if (mem_cgroup_charge(vmf->cow_page, vma->vm_mm, GFP_KERNEL)) {
 		put_page(vmf->cow_page);
 		return VM_FAULT_OOM;
 	}
diff --git a/mm/migrate.c b/mm/migrate.c
index a3361c744069..ced652d069ee 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2792,7 +2792,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 	if (unlikely(anon_vma_prepare(vma)))
 		goto abort;
 
-	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL, false))
+	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL))
 		goto abort;
 
 	/*
diff --git a/mm/shmem.c b/mm/shmem.c
index 966f150a4823..add10d448bc6 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -624,7 +624,7 @@ static int shmem_add_to_page_cache(struct page *page,
 	page->index = index;
 
 	if (!PageSwapCache(page)) {
-		error = mem_cgroup_charge(page, charge_mm, gfp, false);
+		error = mem_cgroup_charge(page, charge_mm, gfp);
 		if (error) {
 			if (PageTransHuge(page)) {
 				count_vm_event(THP_FILE_FALLBACK);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index f3b9073bfff3..26fded65c30d 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -427,7 +427,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 	if (add_to_swap_cache(page, entry, gfp_mask & GFP_KERNEL))
 		goto fail_unlock;
 
-	if (mem_cgroup_charge(page, NULL, gfp_mask & GFP_KERNEL, false))
+	if (mem_cgroup_charge(page, NULL, gfp_mask & GFP_KERNEL))
 		goto fail_delete;
 
 	/* Initiate read into locked page */
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 2745489415cc..7f5194046b01 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -96,7 +96,7 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 	__SetPageUptodate(page);
 
 	ret = -ENOMEM;
-	if (mem_cgroup_charge(page, dst_mm, GFP_KERNEL, false))
+	if (mem_cgroup_charge(page, dst_mm, GFP_KERNEL))
 		goto out_release;
 
 	_dst_pte = pte_mkdirty(mk_pte(page, dst_vma->vm_page_prot));