From patchwork Fri Oct 18 00:28:19 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Roman Gushchin <guro@fb.com>
X-Patchwork-Id: 11197401
From: Roman Gushchin <guro@fb.com>
To: linux-mm@kvack.org
CC: Michal Hocko <mhocko@kernel.org>, Johannes Weiner <hannes@cmpxchg.org>,
    linux-kernel@vger.kernel.org, kernel-team@fb.com,
    Shakeel Butt <shakeelb@google.com>, Vladimir Davydov <vdavydov.dev@gmail.com>,
    Waiman Long <longman@redhat.com>, Christoph Lameter <cl@linux.com>,
    Roman Gushchin <guro@fb.com>
Subject: [PATCH 15/16] tools/cgroup: make slabinfo.py compatible with new slab controller
Date: Thu, 17 Oct 2019 17:28:19 -0700
Message-ID: <20191018002820.307763-16-guro@fb.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20191018002820.307763-1-guro@fb.com>
References: <20191018002820.307763-1-guro@fb.com>

Make slabinfo.py compatible with the new slab controller. Because there
are no longer per-memcg kmem_caches, and there is no list of all slab
pages in the system, the script has to walk over all pages, filter out
slab pages belonging to non-root kmem_caches, and then count the objects
belonging to the given cgroup.

This might sound like a very slow operation, but it is not that slow: it
takes about 30 seconds to walk over 8 GB of slabs out of 64 GB and filter
out all objects belonging to the cgroup of interest. It also provides an
accurate number of active objects, which is not true for the old slab
controller.

The script is backward compatible and works with both kernel versions.
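For illustration only (this is a sketch, not part of the patch below): the
walk-and-count approach can be condensed to roughly the following drgn
snippet. It assumes the structures introduced earlier in this series
(page.mem_cgroup_vec, struct mem_cgroup_ptr, memcg->kmem_memcg_ptr_list)
and inlines the script's oo_objects() helper to stay self-contained.

# Illustrative sketch, not part of the patch: count objects charged to a
# given memory cgroup by scanning every struct page.
from drgn import FaultError
from drgn.helpers.linux import for_each_page, list_for_each_entry

OO_MASK = (1 << 16) - 1      # objects live in the low 16 bits of kmem_cache.oo.x

def count_memcg_objects(prog, memcg):
    # collect the mem_cgroup_ptr addresses owned by the target cgroup
    ptrs = {p.value_() for p in list_for_each_entry(
        'struct mem_cgroup_ptr',
        memcg.kmem_memcg_ptr_list.address_of_(), 'list')}

    PGSlab = 1 << prog.constant('PG_slab')
    total = 0
    for page in for_each_page(prog):
        try:
            if not page.flags.value_() & PGSlab:
                continue
        except FaultError:
            continue             # holes in the memory map
        cache = page.slab_cache
        # root caches keep their own accounting; skip them
        if not cache or not cache.memcg_params.root_cache:
            continue
        vec = page.mem_cgroup_vec
        total += sum(1 for i in range(cache.oo.x.value_() & OO_MASK)
                     if vec[i].value_() in ptrs)
    return total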
Signed-off-by: Roman Gushchin <guro@fb.com>
---
 tools/cgroup/slabinfo.py | 105 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 98 insertions(+), 7 deletions(-)

diff --git a/tools/cgroup/slabinfo.py b/tools/cgroup/slabinfo.py
index 40b01a6ec4b0..79909ee36fe3 100755
--- a/tools/cgroup/slabinfo.py
+++ b/tools/cgroup/slabinfo.py
@@ -8,7 +8,10 @@ import argparse
 import sys
 
 from drgn.helpers.linux import list_for_each_entry, list_empty
-from drgn import container_of
+from drgn.helpers.linux import for_each_page
+from drgn.helpers.linux.cpumask import for_each_online_cpu
+from drgn.helpers.linux.percpu import per_cpu_ptr
+from drgn import container_of, FaultError
 
 
 DESC = """
@@ -47,7 +50,7 @@ def is_root_cache(s):
 
 def cache_name(s):
     if is_root_cache(s):
-        return s.name
+        return s.name.string_().decode('utf-8')
     else:
         return s.memcg_params.root_cache.name.string_().decode('utf-8')
 
@@ -99,12 +102,16 @@ def slub_get_slabinfo(s, cfg):
 # SLAB-specific functions can be added here...
 
 
-def cache_show(s, cfg):
+def cache_show(s, cfg, objs):
     if cfg['allocator'] == 'SLUB':
         sinfo = slub_get_slabinfo(s, cfg)
     else:
         err('SLAB isn\'t supported yet')
 
+    if cfg['shared_slab_pages']:
+        sinfo['active_objs'] = objs
+        sinfo['num_objs'] = objs
+
     print('%-17s %6lu %6lu %6u %4u %4d'
           ' : tunables %4u %4u %4u'
           ' : slabdata %6lu %6lu %6lu' % (
@@ -127,9 +134,60 @@ def detect_kernel_config():
     else:
         err('Can\'t determine the slab allocator')
 
+    if prog.type('struct memcg_cache_params').members[1][1] == 'memcg_cache':
+        cfg['shared_slab_pages'] = True
+    else:
+        cfg['shared_slab_pages'] = False
+
     return cfg
 
 
+def for_each_slab_page(prog):
+    PGSlab = 1 << prog.constant('PG_slab')
+    PGHead = 1 << prog.constant('PG_head')
+
+    for page in for_each_page(prog):
+        try:
+            if page.flags.value_() & PGSlab:
+                yield page
+        except FaultError:
+            pass
+
+
+# it doesn't find all objects, because by default full slabs are not
+# placed to the full list. however, it can be used in certain cases.
+def for_each_slab_page_fast(prog):
+    for s in list_for_each_entry('struct kmem_cache',
+                                 prog['slab_caches'].address_of_(),
+                                 'list'):
+        if is_root_cache(s):
+            continue
+
+        if s.cpu_partial:
+            for cpu in for_each_online_cpu(prog):
+                cpu_slab = per_cpu_ptr(s.cpu_slab, cpu)
+                if cpu_slab.page:
+                    yield cpu_slab.page
+
+                page = cpu_slab.partial
+                while page:
+                    yield page
+                    page = page.next
+
+        for node in range(prog['nr_online_nodes'].value_()):
+            n = s.node[node]
+
+            for page in list_for_each_entry('struct page',
+                                            n.partial.address_of_(),
+                                            'slab_list'):
+                yield page
+
+            for page in list_for_each_entry('struct page',
+                                            n.full.address_of_(),
+                                            'slab_list'):
+                yield page
+
+
 def main():
     parser = argparse.ArgumentParser(description=DESC,
                                      formatter_class=argparse.RawTextHelpFormatter)
@@ -150,10 +208,43 @@ def main():
           ' : tunables <limit> <batchcount> <sharedfactor>'
          ' : slabdata <active_slabs> <num_slabs> <sharedavail>')
 
-    for s in list_for_each_entry('struct kmem_cache',
-                                 memcg.kmem_caches.address_of_(),
-                                 'memcg_params.kmem_caches_node'):
-        cache_show(s, cfg)
+    if cfg['shared_slab_pages']:
+        memcg_ptrs = set()
+        stats = {}
+        caches = {}
+
+        # find memcg pointers belonging to the specified cgroup
+        for ptr in list_for_each_entry('struct mem_cgroup_ptr',
+                                       memcg.kmem_memcg_ptr_list.address_of_(),
+                                       'list'):
+            memcg_ptrs.add(ptr.value_())
+
+        # look over all slab pages, belonging to non-root memcgs
+        # and look for objects belonging to the given memory cgroup
+        for page in for_each_slab_page(prog):
+            cache = page.slab_cache
+            if not cache or is_root_cache(cache):
+                continue
+            addr = cache.value_()
+            caches[addr] = cache
+            memcg_vec = page.mem_cgroup_vec
+
+            if addr not in stats:
+                stats[addr] = 0
+
+            for i in range(oo_objects(cache)):
+                if memcg_vec[i].value_() in memcg_ptrs:
+                    stats[addr] += 1
+
+        for addr in caches:
+            if stats[addr] > 0:
+                cache_show(caches[addr], cfg, stats[addr])
+
+    else:
+        for s in list_for_each_entry('struct kmem_cache',
+                                     memcg.kmem_caches.address_of_(),
+                                     'memcg_params.kmem_caches_node'):
+            cache_show(s, cfg, None)
 
 
 main()
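
For reference, a quick way to see which mode the script will pick on a given
kernel is to reproduce the detect_kernel_config() probe above in an
interactive drgn session. This is an illustration only, not part of the
patch; it relies on the same struct memcg_cache_params layout check used by
the script.

# Illustration only: mirrors the detect_kernel_config() probe from the patch.
members = prog.type('struct memcg_cache_params').members
if members[1][1] == 'memcg_cache':
    print('new slab controller: slab pages are shared, objects are counted')
else:
    print('old slab controller: per-memcg kmem_caches are listed')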