From patchwork Wed Nov 3 04:10:20 2021
X-Patchwork-Submitter: Yunfeng Ye
X-Patchwork-Id: 12600053
From: Yunfeng Ye <yeyunfeng@huawei.com>
Subject: [PATCH v3] mm: emit the "free" trace report before freeing memory in
 kmem_cache_free()
To: Andrew Morton
Cc: Tang Yizhou, Vlastimil Babka
Message-ID: <374eb75d-7404-8721-4e1e-65b0e5b17279@huawei.com>
Date: Wed, 3 Nov 2021 12:10:20 +0800
After the memory is freed, it can be immediately allocated by other CPUs,
before the "free" trace report has been emitted. This causes inaccurate
traces.

For example, if the following sequence of events occurs:

    CPU 0                 CPU 1

    (1) alloc xxxxxx
    (2) free  xxxxxx

                          (3) alloc xxxxxx
                          (4) free  xxxxxx

Then they will be inaccurately reported via tracing, so that they appear
to have happened in this order:

    CPU 0                 CPU 1

    (1) alloc xxxxxx

                          (2) alloc xxxxxx
    (3) free  xxxxxx

                          (4) free  xxxxxx

This makes it look like CPU 1 somehow managed to allocate memory that
CPU 0 still had allocated for itself.

In order to avoid this, emit the "free xxxxxx" tracing report just
before the actual call to free the memory, instead of just after it.

Signed-off-by: Yunfeng Ye
Reviewed-by: Vlastimil Babka
Reviewed-by: John Hubbard
---
v2 -> v3:
 - Fix the typo "mmemory" to "memory"
 - Add "Reviewed-by"
 - Fix the same problem in slab/slob

v1 -> v2:
 - Modify the description
 - Add "Reviewed-by"

 mm/slab.c | 3 +--
 mm/slob.c | 3 +--
 mm/slub.c | 2 +-
 3 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index da132a9ae6f8..ca4822f6b2b6 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3733,14 +3733,13 @@ void kmem_cache_free(struct kmem_cache *cachep, void *objp)
 	if (!cachep)
 		return;
 
+	trace_kmem_cache_free(_RET_IP_, objp, cachep->name);
 	local_irq_save(flags);
 	debug_check_no_locks_freed(objp, cachep->object_size);
 	if (!(cachep->flags & SLAB_DEBUG_OBJECTS))
 		debug_check_no_obj_freed(objp, cachep->object_size);
 	__cache_free(cachep, objp, _RET_IP_);
 	local_irq_restore(flags);
-
-	trace_kmem_cache_free(_RET_IP_, objp, cachep->name);
 }
 EXPORT_SYMBOL(kmem_cache_free);
 
diff --git a/mm/slob.c b/mm/slob.c
index 74d3f6e60666..03deee1e6a94 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -666,6 +666,7 @@ static void kmem_rcu_free(struct rcu_head *head)
 void kmem_cache_free(struct kmem_cache *c, void *b)
 {
 	kmemleak_free_recursive(b, c->flags);
+	trace_kmem_cache_free(_RET_IP_, b, c->name);
 	if (unlikely(c->flags & SLAB_TYPESAFE_BY_RCU)) {
 		struct slob_rcu *slob_rcu;
 		slob_rcu = b + (c->size - sizeof(struct slob_rcu));
@@ -674,8 +675,6 @@ void kmem_cache_free(struct kmem_cache *c, void *b)
 	} else {
 		__kmem_cache_free(b, c->size);
 	}
-
-	trace_kmem_cache_free(_RET_IP_, b, c->name);
 }
 EXPORT_SYMBOL(kmem_cache_free);
 
diff --git a/mm/slub.c b/mm/slub.c
index 432145d7b4ec..427e62034c3f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3526,8 +3526,8 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
 	s = cache_from_obj(s, x);
 	if (!s)
 		return;
-	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
 	trace_kmem_cache_free(_RET_IP_, x, s->name);
+	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
 }
 EXPORT_SYMBOL(kmem_cache_free);
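
For illustration only, here is a minimal user-space sketch of the same
ordering problem (assuming POSIX threads; all names such as obj_alloc(),
obj_free_then_log() and log_event() are made up for this example, none of
this is kernel code). Once the object is back on the free list, another
thread can reuse it and log its "alloc" before the freeing thread gets to
log its "free", which is the inversion described in the commit message;
logging before the release, as this patch does, closes that window.

	/*
	 * Hypothetical sketch: obj_free_then_log() mirrors the old
	 * "free, then trace" ordering, obj_log_then_free() mirrors the
	 * ordering after this patch.
	 */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>
	#include <stdlib.h>

	static atomic_int seq;                  /* global log order      */
	static void *slot;                      /* one-entry "free list" */
	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

	static void log_event(const char *what, void *obj)
	{
		printf("%d: %s %p\n", atomic_fetch_add(&seq, 1), what, obj);
	}

	static void obj_free_then_log(void *obj) /* old, racy ordering   */
	{
		pthread_mutex_lock(&lock);
		slot = obj;                      /* reusable from here   */
		pthread_mutex_unlock(&lock);
		log_event("free ", obj);         /* may run after reuse  */
	}

	static void obj_log_then_free(void *obj) /* ordering after fix   */
	{
		log_event("free ", obj);
		pthread_mutex_lock(&lock);
		slot = obj;
		pthread_mutex_unlock(&lock);
	}

	static void *obj_alloc(void)
	{
		void *obj;

		pthread_mutex_lock(&lock);
		obj = slot;
		slot = NULL;
		pthread_mutex_unlock(&lock);
		if (obj)
			log_event("alloc", obj);
		return obj;
	}

	static void *other_cpu(void *unused)
	{
		void *obj;

		while (!(obj = obj_alloc()))     /* wait until freed     */
			;
		obj_log_then_free(obj);
		return NULL;
	}

	int main(void)
	{
		pthread_t t;
		void *obj = malloc(64);

		log_event("alloc", obj);
		pthread_create(&t, NULL, other_cpu, NULL);
		obj_free_then_log(obj);          /* may log last         */
		pthread_join(t, NULL);
		free(obj);
		return 0;
	}

Built with "gcc -pthread", repeated runs can print the second thread's
"alloc" with a lower sequence number than the first thread's "free",
i.e. the same apparent alloc-before-free inversion; with
obj_log_then_free() used on both sides the log order always matches the
real order.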