From patchwork Thu Sep 30 07:02:14 2021
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 12527461
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Matthew Wilcox
Cc: Kefeng Wang
Subject: [PATCH v3] slub: add back check for free nonslab objects
Date: Thu, 30 Sep 2021 15:02:14 +0800
Message-ID: <20210930070214.61499-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.26.2
After commit f227f0faf63b ("slub: fix unreclaimable slab stat for bulk
free"), the check for a free of a nonslab page was replaced by
VM_BUG_ON_PAGE, which is only evaluated with CONFIG_DEBUG_VM enabled.
Since that config may impact performance, it is intended for debugging
only.

Commit 0937502af7c9 ("slub: Add check for kfree() of non slab objects.")
added this check, and it should be available in all configurations to
catch invalid frees, which could point at real problems, e.g. memory
corruption, use-after-free and double-free. So replace the
VM_BUG_ON_PAGE with WARN_ON_ONCE, and print the object address to help
debug the issue.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
v3:
- use 'once' mechanism suggested by Shakeel Butt
- drop dump_page suggested by Matthew Wilcox
v2:
- add object address printing suggested by Matthew Wilcox

 mm/slub.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 3d2025f7163b..336eceea0c75 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3513,7 +3513,9 @@ static inline void free_nonslab_page(struct page *page, void *object)
 {
 	unsigned int order = compound_order(page);
 
-	VM_BUG_ON_PAGE(!PageCompound(page), page);
+	if (WARN_ON_ONCE(!PageCompound(page)))
+		pr_warn_once("object pointer: 0x%p\n", object);
+
 	kfree_hook(object);
 	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
 	__free_pages(page, order);
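
For context only, not part of the patch: a minimal, hypothetical test
module sketch that would exercise the new warning path. Memory from
__get_free_page() is neither a slab object nor a compound (large
kmalloc) page, so passing it to kfree() reaches free_nonslab_page()
with !PageCompound(page), which this patch now reports in all
configurations via WARN_ON_ONCE() together with the object pointer.
The module and function names below are made up for illustration.

	#include <linux/module.h>
	#include <linux/gfp.h>
	#include <linux/slab.h>

	static int __init bad_kfree_init(void)
	{
		/*
		 * An order-0 page from the page allocator: not a slab
		 * object and not a compound page, so it is not a valid
		 * pointer to pass to kfree().
		 */
		unsigned long addr = __get_free_page(GFP_KERNEL);

		if (!addr)
			return -ENOMEM;

		/*
		 * Invalid free: with this patch it hits WARN_ON_ONCE()
		 * in free_nonslab_page() and prints the object pointer,
		 * even without CONFIG_DEBUG_VM.
		 */
		kfree((void *)addr);

		return 0;
	}

	static void __exit bad_kfree_exit(void)
	{
	}

	module_init(bad_kfree_init);
	module_exit(bad_kfree_exit);
	MODULE_LICENSE("GPL");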