From patchwork Thu Jul 16 16:51:03 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11667837
From: Muchun Song
To: guro@fb.com, vbabka@suse.cz, cl@linux.com, penberg@kernel.org,
    rientjes@google.com, iamjoonsoo.kim@lge.com, akpm@linux-foundation.org,
    shakeelb@google.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Muchun Song
Subject: [PATCH v3] mm: memcg/slab: fix memory leak at non-root kmem_cache destroy
Date: Fri, 17 Jul 2020 00:51:03 +0800
Message-Id: <20200716165103.83462-1-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)

If the kmem_cache refcount is greater than one, we should not mark the
root kmem_cache as dying. If we mark the root kmem_cache dying
incorrectly, the non-root kmem_cache can never be destroyed. This
results in a memory leak when the memcg is destroyed. The bug can be
reproduced with the following steps:

 1) Use kmem_cache_create() to create a new kmem_cache named A.
 2) Coincidentally, kmem_cache A is an alias of an existing kmem_cache B,
    so only B's refcount is increased.
 3) Use kmem_cache_destroy() to destroy kmem_cache A. This only
    decreases B's refcount, but it also (incorrectly) marks B as dying.
 4) Create a new memory cgroup and allocate memory from kmem_cache B,
    which creates a non-root kmem_cache for that allocation.
 5) When the memory cgroup created in step 4) is destroyed, the
    non-root kmem_cache can never be destroyed.

Repeating steps 4) and 5) leaks more memory each time. So mark the root
kmem_cache as dying only when its refcount reaches zero.
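A minimal kernel-module-style sketch of steps 1)-3), under stated
assumptions: the cache name "repro_cache_a" and the object size are
hypothetical, and whether step 2) actually merges the new cache into an
existing kmem_cache B depends on the allocator's slab-merging
heuristics:

  #include <linux/module.h>
  #include <linux/slab.h>

  static struct kmem_cache *cache_a;

  static int __init repro_init(void)
  {
  	/*
  	 * Steps 1) and 2): if the parameters make this cache mergeable
  	 * with an existing kmem_cache B, no new cache is created and
  	 * only B's refcount is incremented.
  	 */
  	cache_a = kmem_cache_create("repro_cache_a", 128, 0, 0, NULL);
  	if (!cache_a)
  		return -ENOMEM;

  	/*
  	 * Step 3): before this fix, this decrements B's refcount
  	 * (which stays > 0) but still marks B as dying.
  	 */
  	kmem_cache_destroy(cache_a);
  	return 0;
  }

  static void __exit repro_exit(void)
  {
  }

  module_init(repro_init);
  module_exit(repro_exit);
  MODULE_LICENSE("GPL");

Steps 4) and 5) then amount to creating a memory cgroup, allocating
from B inside it, and deleting the cgroup; the non-root (per-memcg)
copy of B created for that allocation is what leaks.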
Fixes: 92ee383f6daa ("mm: fix race between kmem_cache destroy, create and deactivate")
Signed-off-by: Muchun Song
Reviewed-by: Shakeel Butt
Acked-by: Roman Gushchin
---
changelog in v3:
 1) Simplify the code suggested by Roman.

changelog in v2:
 1) Fix a confusing typo in the commit log.
 2) Remove flush_memcg_workqueue() for !CONFIG_MEMCG_KMEM.
 3) Introduce a new helper memcg_set_kmem_cache_dying() to fix a race
    condition between flush_memcg_workqueue() and slab_unmergeable().

 mm/slab_common.c | 35 ++++++++++++++++++++++++++++-------
 1 file changed, 28 insertions(+), 7 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 37d48a56431d..fe8b68482670 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -326,6 +326,14 @@ int slab_unmergeable(struct kmem_cache *s)
 	if (s->refcount < 0)
 		return 1;
 
+#ifdef CONFIG_MEMCG_KMEM
+	/*
+	 * Skip the dying kmem_cache.
+	 */
+	if (s->memcg_params.dying)
+		return 1;
+#endif
+
 	return 0;
 }
 
@@ -886,12 +894,15 @@ static int shutdown_memcg_caches(struct kmem_cache *s)
 	return 0;
 }
 
-static void flush_memcg_workqueue(struct kmem_cache *s)
+static void memcg_set_kmem_cache_dying(struct kmem_cache *s)
 {
 	spin_lock_irq(&memcg_kmem_wq_lock);
 	s->memcg_params.dying = true;
 	spin_unlock_irq(&memcg_kmem_wq_lock);
+}
 
+static void flush_memcg_workqueue(struct kmem_cache *s)
+{
 	/*
 	 * SLAB and SLUB deactivate the kmem_caches through call_rcu. Make
 	 * sure all registered rcu callbacks have been invoked.
@@ -923,10 +934,6 @@ static inline int shutdown_memcg_caches(struct kmem_cache *s)
 {
 	return 0;
 }
-
-static inline void flush_memcg_workqueue(struct kmem_cache *s)
-{
-}
 #endif /* CONFIG_MEMCG_KMEM */
 
 void slab_kmem_cache_release(struct kmem_cache *s)
@@ -944,8 +951,6 @@ void kmem_cache_destroy(struct kmem_cache *s)
 	if (unlikely(!s))
 		return;
 
-	flush_memcg_workqueue(s);
-
 	get_online_cpus();
 	get_online_mems();
 
@@ -955,6 +960,22 @@ void kmem_cache_destroy(struct kmem_cache *s)
 	if (s->refcount)
 		goto out_unlock;
 
+#ifdef CONFIG_MEMCG_KMEM
+	memcg_set_kmem_cache_dying(s);
+
+	mutex_unlock(&slab_mutex);
+
+	put_online_mems();
+	put_online_cpus();
+
+	flush_memcg_workqueue(s);
+
+	get_online_cpus();
+	get_online_mems();
+
+	mutex_lock(&slab_mutex);
+#endif
+
 	err = shutdown_memcg_caches(s);
 	if (!err)
 		err = shutdown_cache(s);