From patchwork Tue Dec 21 21:53:36 2021
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 12690647
Date: Tue, 21 Dec 2021 13:53:36 -0800
Message-Id: <20211221215336.1922823-1-shakeelb@google.com>
Subject: [PATCH] memcg: add per-memcg vmalloc stat
From: Shakeel Butt
To: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Shakeel Butt

The kvmalloc* allocation functions can fall back to vmalloc allocations,
and they do so more often on long-running machines. In addition, the
kernel has __GFP_ACCOUNT kvmalloc* calls, so on long-running machines
memory.stat often does not give a complete picture of which type of
memory is charged to the memcg. Therefore, add a per-memcg vmalloc stat.

Signed-off-by: Shakeel Butt
---
 Documentation/admin-guide/cgroup-v2.rst |  3 +++
 include/linux/memcontrol.h              | 15 +++++++++++++++
 mm/memcontrol.c                         |  1 +
 mm/vmalloc.c                            |  5 +++++
 4 files changed, 24 insertions(+)
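
Reviewer note (not part of the commit message): as a rough illustration of
the kind of caller this stat is meant to capture, below is a minimal,
hypothetical sketch of a __GFP_ACCOUNT kvmalloc allocation that is large
enough to fall back to vmalloc. The function names and the element count
are invented for the example; only kvmalloc_array(), kvfree() and
GFP_KERNEL_ACCOUNT are existing kernel API. The pages obtained through the
vmalloc fallback are what the new MEMCG_VMALLOC counter accounts.

#include <linux/types.h>
#include <linux/mm.h>
#include <linux/slab.h>

/* Hypothetical caller, for illustration only. */
static void *alloc_big_table(size_t nr_entries)
{
	/*
	 * kvmalloc_array() tries kmalloc() first and falls back to
	 * vmalloc() for large sizes; GFP_KERNEL_ACCOUNT (i.e.
	 * GFP_KERNEL | __GFP_ACCOUNT) charges the memory to the
	 * caller's memcg either way. The vmalloc-backed case is what
	 * shows up under the new "vmalloc" entry in memory.stat.
	 */
	return kvmalloc_array(nr_entries, sizeof(u64), GFP_KERNEL_ACCOUNT);
}

static void free_big_table(void *table)
{
	kvfree(table);	/* handles both kmalloc and vmalloc backing */
}

With such a caller running inside a cgroup, the charged memory already
shows up in memory.current; the new stat additionally breaks out the
vmalloc-backed portion in memory.stat.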

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 82c8dc91b2be..5aa368d165da 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1314,6 +1314,9 @@ PAGE_SIZE multiple when read back.
 	  sock (npn)
 		Amount of memory used in network transmission buffers
 
+	  vmalloc (npn)
+		Amount of memory used for vmap backed memory.
+
 	  shmem
 		Amount of cached filesystem data that is swap-backed,
 		such as tmpfs, shm segments, shared anonymous mmap()s
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index d76dad703580..000bfad6ff69 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -33,6 +33,7 @@ enum memcg_stat_item {
 	MEMCG_SWAP = NR_VM_NODE_STAT_ITEMS,
 	MEMCG_SOCK,
 	MEMCG_PERCPU_B,
+	MEMCG_VMALLOC,
 	MEMCG_NR_STAT,
 };
 
@@ -944,6 +945,15 @@ static inline void mod_memcg_state(struct mem_cgroup *memcg,
 	local_irq_restore(flags);
 }
 
+static inline void mod_memcg_page_state(struct page *page,
+					int idx, int val)
+{
+	struct mem_cgroup *memcg = page_memcg(page);
+
+	if (!mem_cgroup_disabled() && memcg)
+		mod_memcg_state(memcg, idx, val);
+}
+
 static inline unsigned long memcg_page_state(struct mem_cgroup *memcg, int idx)
 {
 	return READ_ONCE(memcg->vmstats.state[idx]);
@@ -1399,6 +1409,11 @@ static inline void mod_memcg_state(struct mem_cgroup *memcg,
 {
 }
 
+static inline void mod_memcg_page_state(struct page *page,
+					int idx, int val)
+{
+}
+
 static inline unsigned long memcg_page_state(struct mem_cgroup *memcg, int idx)
 {
 	return 0;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7ae77608847e..7027a3cc416f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1375,6 +1375,7 @@ static const struct memory_stat memory_stats[] = {
 	{ "pagetables", NR_PAGETABLE },
 	{ "percpu", MEMCG_PERCPU_B },
 	{ "sock", MEMCG_SOCK },
+	{ "vmalloc", MEMCG_VMALLOC },
 	{ "shmem", NR_SHMEM },
 	{ "file_mapped", NR_FILE_MAPPED },
 	{ "file_dirty", NR_FILE_DIRTY },
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index eb6e527a6b77..af67ce4fd402 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -39,6 +39,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <linux/memcontrol.h>
 #include <...>
 #include <...>
 #include <...>
@@ -2626,6 +2627,9 @@ static void __vunmap(const void *addr, int deallocate_pages)
 		unsigned int page_order = vm_area_page_order(area);
 		int i;
 
+		mod_memcg_page_state(area->pages[0], MEMCG_VMALLOC,
+				     -(int)area->nr_pages);
+
 		for (i = 0; i < area->nr_pages; i += 1U << page_order) {
 			struct page *page = area->pages[i];
 
@@ -2964,6 +2968,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		page_order, nr_small_pages, area->pages);
 
 	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
+	mod_memcg_page_state(area->pages[0], MEMCG_VMALLOC, area->nr_pages);
 
 	/*
 	 * If not enough pages were obtained to accomplish an
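
Trailing note (not part of the patch): a small userspace sketch, under the
assumption that this lands as-is, showing how the new counter could be read.
The cgroup path below is only an example and differs per system; the
"vmalloc" key matches the memory_stats[] entry added above and, like the
other memory.stat entries, is reported in bytes.

#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Example path; the real cgroup path depends on the system. */
	const char *path = "/sys/fs/cgroup/example.slice/memory.stat";
	char key[64];
	unsigned long long val;
	FILE *f = fopen(path, "r");

	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fscanf(f, "%63s %llu", key, &val) == 2) {
		if (!strcmp(key, "vmalloc")) {
			printf("vmalloc: %llu bytes\n", val);
			break;
		}
	}
	fclose(f);
	return 0;
}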