From patchwork Sun Aug 30 21:09:52 2020
From: Hugh Dickins <hughd@google.com>
Date: Sun, 30 Aug 2020 14:09:52 -0700 (PDT)
To: Andrew Morton
Cc: Alex Shi, Johannes Weiner, Michal Hocko, Mike Kravetz, Shakeel Butt,
    Matthew Wilcox, Qian Cai, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 5/5] mlock: fix unevictable_pgs event counts on THP

5.8 commit 5d91f31faf8e ("mm: swap: fix vmstats for huge page") has
established that vm_events should count every subpage of a THP, including
unevictable_pgs_culled and unevictable_pgs_rescued; but
lru_cache_add_inactive_or_unevictable() was not doing so for
unevictable_pgs_mlocked, and mm/mlock.c was not doing so for
unevictable_pgs mlocked, munlocked, cleared and stranded.

Fix them; but THPs don't go the pagevec way in mlock.c, so no fixes
needed on that path.

Fixes: 5d91f31faf8e ("mm: swap: fix vmstats for huge page")
Signed-off-by: Hugh Dickins
Reviewed-by: Shakeel Butt
Acked-by: Yang Shi
---
I've only checked UNEVICTABLEs: there may be more inconsistencies left.
The check_move_unevictable_pages() patch brought me to this one, but
this is more important because mlock works on all THPs, without needing
special testing "force". But, it's still just monotonically increasing
event counts, so not all that important.

 mm/mlock.c | 24 +++++++++++++++---------
 mm/swap.c  |  6 +++---
 2 files changed, 18 insertions(+), 12 deletions(-)

--- 5.9-rc2/mm/mlock.c	2020-08-16 17:32:50.665507048 -0700
+++ linux/mm/mlock.c	2020-08-28 17:42:07.975278411 -0700
@@ -58,11 +58,14 @@ EXPORT_SYMBOL(can_do_mlock);
  */
 void clear_page_mlock(struct page *page)
 {
+	int nr_pages;
+
 	if (!TestClearPageMlocked(page))
 		return;
 
-	mod_zone_page_state(page_zone(page), NR_MLOCK, -thp_nr_pages(page));
-	count_vm_event(UNEVICTABLE_PGCLEARED);
+	nr_pages = thp_nr_pages(page);
+	mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
+	count_vm_events(UNEVICTABLE_PGCLEARED, nr_pages);
 	/*
 	 * The previous TestClearPageMlocked() corresponds to the smp_mb()
 	 * in __pagevec_lru_add_fn().
@@ -76,7 +79,7 @@ void clear_page_mlock(struct page *page)
 		 * We lost the race. the page already moved to evictable list.
 		 */
 		if (PageUnevictable(page))
-			count_vm_event(UNEVICTABLE_PGSTRANDED);
+			count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
 	}
 }
 
@@ -93,9 +96,10 @@ void mlock_vma_page(struct page *page)
 	VM_BUG_ON_PAGE(PageCompound(page) && PageDoubleMap(page), page);
 
 	if (!TestSetPageMlocked(page)) {
-		mod_zone_page_state(page_zone(page), NR_MLOCK,
-				    thp_nr_pages(page));
-		count_vm_event(UNEVICTABLE_PGMLOCKED);
+		int nr_pages = thp_nr_pages(page);
+
+		mod_zone_page_state(page_zone(page), NR_MLOCK, nr_pages);
+		count_vm_events(UNEVICTABLE_PGMLOCKED, nr_pages);
 		if (!isolate_lru_page(page))
 			putback_lru_page(page);
 	}
@@ -138,7 +142,7 @@ static void __munlock_isolated_page(stru
 
 	/* Did try_to_unlock() succeed or punt? */
 	if (!PageMlocked(page))
-		count_vm_event(UNEVICTABLE_PGMUNLOCKED);
+		count_vm_events(UNEVICTABLE_PGMUNLOCKED, thp_nr_pages(page));
 
 	putback_lru_page(page);
 }
@@ -154,10 +158,12 @@ static void __munlock_isolated_page(stru
  */
 static void __munlock_isolation_failed(struct page *page)
 {
+	int nr_pages = thp_nr_pages(page);
+
 	if (PageUnevictable(page))
-		__count_vm_event(UNEVICTABLE_PGSTRANDED);
+		__count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
 	else
-		__count_vm_event(UNEVICTABLE_PGMUNLOCKED);
+		__count_vm_events(UNEVICTABLE_PGMUNLOCKED, nr_pages);
 }
 
 /**
--- 5.9-rc2/mm/swap.c	2020-08-16 17:32:50.709507284 -0700
+++ linux/mm/swap.c	2020-08-28 17:42:07.975278411 -0700
@@ -494,14 +494,14 @@ void lru_cache_add_inactive_or_unevictab
 	unevictable = (vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) == VM_LOCKED;
 	if (unlikely(unevictable) && !TestSetPageMlocked(page)) {
+		int nr_pages = thp_nr_pages(page);
 		/*
 		 * We use the irq-unsafe __mod_zone_page_stat because this
 		 * counter is not modified from interrupt context, and the pte
 		 * lock is held(spinlock), which implies preemption disabled.
 		 */
-		__mod_zone_page_state(page_zone(page), NR_MLOCK,
-				    thp_nr_pages(page));
-		count_vm_event(UNEVICTABLE_PGMLOCKED);
+		__mod_zone_page_state(page_zone(page), NR_MLOCK, nr_pages);
+		count_vm_events(UNEVICTABLE_PGMLOCKED, nr_pages);
 	}
 	lru_cache_add(page);
 }