From patchwork Mon Aug 29 07:56:16 2022
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 12957488
From: Feng Tang
To: Andrew Morton, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Roman Gushchin,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, Dmitry Vyukov
Cc: Dave Hansen, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Feng Tang
Subject: [PATCH v4 2/4] mm/slub: only zero the requested size of buffer for
	kzalloc
Date: Mon, 29 Aug 2022 15:56:16 +0800
Message-Id: <20220829075618.69069-3-feng.tang@intel.com>
In-Reply-To: <20220829075618.69069-1-feng.tang@intel.com>
References: <20220829075618.69069-1-feng.tang@intel.com>

kzalloc/kmalloc will round up the requested size to a fixed size
(mostly a power of 2), so the allocated memory can be larger than what
was requested. Currently the kzalloc family of APIs zeroes all of the
allocated memory.

To detect out-of-bounds usage of that extra allocated memory, zero only
the requested part, so that a sanity check can be added for the extra
space later.

kzalloc users who call ksize() later and make use of this extra space
should be aware that the space is no longer zeroed.

Signed-off-by: Feng Tang
---
 mm/slab.c | 6 +++---
 mm/slab.h | 9 +++++++--
 mm/slub.c | 6 +++---
 3 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index a5486ff8362a..73ecaa7066e1 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3253,7 +3253,7 @@ slab_alloc_node(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 	init = slab_want_init_on_alloc(flags, cachep);
 
 out:
-	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init);
+	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init, 0);
 	return objp;
 }
 
@@ -3506,13 +3506,13 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	 * Done outside of the IRQ disabled section.
 	 */
 	slab_post_alloc_hook(s, objcg, flags, size, p,
-				slab_want_init_on_alloc(flags, s));
+				slab_want_init_on_alloc(flags, s), 0);
 	/* FIXME: Trace call missing. Christoph would like a bulk variant */
 	return size;
 error:
 	local_irq_enable();
 	cache_alloc_debugcheck_after_bulk(s, flags, i, p, _RET_IP_);
-	slab_post_alloc_hook(s, objcg, flags, i, p, false);
+	slab_post_alloc_hook(s, objcg, flags, i, p, false, 0);
 	kmem_cache_free_bulk(s, i, p);
 	return 0;
 }
diff --git a/mm/slab.h b/mm/slab.h
index 65023f000d42..1c773195cfcd 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -720,12 +720,17 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
 
 static inline void slab_post_alloc_hook(struct kmem_cache *s,
 					struct obj_cgroup *objcg, gfp_t flags,
-					size_t size, void **p, bool init)
+					size_t size, void **p, bool init,
+					unsigned int orig_size)
 {
 	size_t i;
 
 	flags &= gfp_allowed_mask;
 
+	/* If the original request size (kmalloc) is not set, use object_size */
+	if (!orig_size)
+		orig_size = s->object_size;
+
 	/*
 	 * As memory initialization might be integrated into KASAN,
 	 * kasan_slab_alloc and initialization memset must be
@@ -736,7 +741,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	for (i = 0; i < size; i++) {
 		p[i] = kasan_slab_alloc(s, p[i], flags, init);
 		if (p[i] && init && !kasan_has_integrated_init())
-			memset(p[i], 0, s->object_size);
+			memset(p[i], 0, orig_size);
 		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);
 	}
diff --git a/mm/slub.c b/mm/slub.c
index d8bab650ed99..936b7be0642a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3360,7 +3360,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
 	init = slab_want_init_on_alloc(gfpflags, s);
 
 out:
-	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init);
+	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init, orig_size);
 	return object;
 }
 
@@ -3817,11 +3817,11 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	 * Done outside of the IRQ disabled fastpath loop.
 	 */
 	slab_post_alloc_hook(s, objcg, flags, size, p,
-				slab_want_init_on_alloc(flags, s));
+				slab_want_init_on_alloc(flags, s), 0);
 	return i;
 error:
 	slub_put_cpu_ptr(s->cpu_slab);
-	slab_post_alloc_hook(s, objcg, flags, i, p, false);
+	slab_post_alloc_hook(s, objcg, flags, i, p, false, 0);
 	kmem_cache_free_bulk(s, i, p);
 	return 0;
 }
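
As a quick illustration of the round-up behavior the changelog
describes, here is a minimal userspace C sketch (not kernel code). The
size-class table and the kmalloc_rounded() helper are made up for
illustration; they approximate, but are not, the kernel's real kmalloc
cache setup:

#include <stdio.h>

/*
 * Simplified stand-in for the kmalloc size classes: powers of two
 * plus the 96- and 192-byte caches.
 */
static size_t kmalloc_rounded(size_t request)
{
	static const size_t classes[] = {
		8, 16, 32, 64, 96, 128, 192, 256, 512, 1024, 2048, 4096
	};

	for (size_t i = 0; i < sizeof(classes) / sizeof(classes[0]); i++) {
		if (request <= classes[i])
			return classes[i];
	}
	return request;	/* larger requests go elsewhere in this toy model */
}

int main(void)
{
	/*
	 * A 52-byte request is served from the 64-byte class, so 12
	 * bytes beyond the request are allocated but never asked for.
	 */
	printf("request=52 -> object=%zu\n", kmalloc_rounded(52));
	return 0;
}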
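
Similarly, a minimal userspace model of the new zeroing rule in
slab_post_alloc_hook(): struct fake_cache and post_alloc_zero() below
are hypothetical stand-ins for struct kmem_cache and the real hook,
which this sketch only approximates:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct fake_cache {
	size_t object_size;	/* rounded-up slab object size */
};

/*
 * Zero only what the caller asked for; orig_size == 0 marks a
 * non-kmalloc caller and falls back to the full object_size, as in
 * the patched hook.
 */
static void post_alloc_zero(struct fake_cache *s, void *p, size_t orig_size)
{
	if (!orig_size)
		orig_size = s->object_size;
	memset(p, 0, orig_size);
}

int main(void)
{
	struct fake_cache cache = { .object_size = 64 };
	unsigned char *obj = malloc(cache.object_size);

	memset(obj, 0xaa, cache.object_size);	/* stale slab contents */
	post_alloc_zero(&cache, obj, 52);	/* models kzalloc(52) */

	/*
	 * Bytes [0, 52) are zeroed; bytes [52, 64) keep 0xaa — the
	 * untouched region where a later sanity check can live.
	 */
	printf("byte 0 = %#x, byte 63 = %#x\n", obj[0], obj[63]);
	free(obj);
	return 0;
}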