From patchwork Tue Nov 2 11:43:00 2021
X-Patchwork-Submitter: Yunfeng Ye
X-Patchwork-Id: 12598595
From: Yunfeng Ye
To: Andrew Morton, linux-mm@kvack.org
Cc: Hewenliang
Subject: [PATCH v2] mm, slub: emit the "free" trace report before freeing memory in kmem_cache_free()
Date: Tue, 2 Nov 2021 19:43:00 +0800
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
After the memory is freed, it can be immediately allocated by other
CPUs, before the "free" trace report has been emitted. This causes
inaccurate traces.

For example, if the following sequence of events occurs:

        CPU 0                     CPU 1

  (1) alloc xxxxxx
  (2) free  xxxxxx
                              (3) alloc xxxxxx
                              (4) free  xxxxxx

Then they will be inaccurately reported via tracing, so that they appear
to have happened in this order:

        CPU 0                     CPU 1

  (1) alloc xxxxxx
                              (2) alloc xxxxxx
  (3) free  xxxxxx
                              (4) free  xxxxxx

This makes it look like CPU 1 somehow managed to allocate memory that
CPU 0 still had allocated for itself.

In order to avoid this, emit the "free xxxxxx" tracing report just
before the actual call to free the memory, instead of just after it.

Signed-off-by: Yunfeng Ye
Reviewed-by: Vlastimil Babka
Reviewed-by: John Hubbard
---
v1 -> v2:
 - Modify the description
 - Add "Reviewed-by"

 mm/slub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 432145d7b4ec..427e62034c3f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3526,8 +3526,8 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
 	s = cache_from_obj(s, x);
 	if (!s)
 		return;
-	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
 	trace_kmem_cache_free(_RET_IP_, x, s->name);
+	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
 }
 EXPORT_SYMBOL(kmem_cache_free);
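
For reference, a sketch of how kmem_cache_free() reads after this change,
reconstructed from the hunk context above; the opening brace and the
explanatory comment are assumptions added for illustration, not quoted
from mm/slub.c:

void kmem_cache_free(struct kmem_cache *s, void *x)
{
	s = cache_from_obj(s, x);
	if (!s)
		return;
	/* Emit the "free" trace report before the object can be reallocated
	 * by another CPU, so trace order matches the actual alloc/free order.
	 */
	trace_kmem_cache_free(_RET_IP_, x, s->name);
	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
}
EXPORT_SYMBOL(kmem_cache_free);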