From patchwork Tue Feb 18 05:41:40 2020
X-Patchwork-Submitter: David Rientjes
X-Patchwork-Id: 11387847
Date: Mon, 17 Feb 2020 21:41:40 -0800 (PST)
From: David Rientjes <rientjes@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
    Mike Rapoport <rppt@linux.ibm.com>,
    Jeremy Cline <jcline@redhat.com>,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [patch] mm, thp: track fallbacks due to failed memcg charges separately

The thp_fault_fallback stat in /proc/vmstat is incremented if either the
hugepage allocation fails through the page allocator or the hugepage charge
fails through mem cgroup.

This patch leaves this field untouched but adds a new field,
thp_fault_fallback_charge, which is incremented only when the mem cgroup
charge fails.

This distinguishes between faults that want to be backed by hugepages but
fail due to fragmentation (or low memory conditions) and those that fail
due to mem cgroup limits.
That can be used to determine the impact of fragmentation on the system by
excluding faults that failed due to memcg usage.

Signed-off-by: David Rientjes <rientjes@google.com>
---
 Documentation/admin-guide/mm/transhuge.rst | 5 +++++
 include/linux/vm_event_item.h              | 1 +
 mm/huge_memory.c                           | 2 ++
 mm/vmstat.c                                | 1 +
 4 files changed, 9 insertions(+)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -310,6 +310,11 @@ thp_fault_fallback
 	is incremented if a page fault fails to allocate
 	a huge page and instead falls back to using small pages.
 
+thp_fault_fallback_charge
+	is incremented if a page fault fails to charge a huge page and
+	instead falls back to using small pages even though the
+	allocation was successful.
+
 thp_collapse_alloc_failed
 	is incremented if khugepaged found a range
 	of pages that should be collapsed into one huge page but failed
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -73,6 +73,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 		THP_FAULT_ALLOC,
 		THP_FAULT_FALLBACK,
+		THP_FAULT_FALLBACK_CHARGE,
 		THP_COLLAPSE_ALLOC,
 		THP_COLLAPSE_ALLOC_FAILED,
 		THP_FILE_ALLOC,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -597,6 +597,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 	if (mem_cgroup_try_charge_delay(page, vma->vm_mm, gfp, &memcg, true)) {
 		put_page(page);
 		count_vm_event(THP_FAULT_FALLBACK);
+		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
 		return VM_FAULT_FALLBACK;
 	}
 
@@ -1406,6 +1407,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 		put_page(page);
 		ret |= VM_FAULT_FALLBACK;
 		count_vm_event(THP_FAULT_FALLBACK);
+		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
 		goto out;
 	}
 
diff --git a/mm/vmstat.c b/mm/vmstat.c
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1254,6 +1254,7 @@ const char * const vmstat_text[] = {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	"thp_fault_alloc",
 	"thp_fault_fallback",
+	"thp_fault_fallback_charge",
 	"thp_collapse_alloc",
 	"thp_collapse_alloc_failed",
 	"thp_file_alloc",