From patchwork Wed Jul 27 07:10:40 2022
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 12930193
From: Feng Tang
To: Andrew Morton, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Cc: Dave Hansen, Robin Murphy, John Garry, Kefeng Wang, Feng Tang
Subject: [PATCH v3 1/3] mm/slub: enable debugging memory wasting of kmalloc
Date: Wed, 27 Jul 2022 15:10:40 +0800
Message-Id: <20220727071042.8796-2-feng.tang@intel.com>
In-Reply-To: <20220727071042.8796-1-feng.tang@intel.com>
References: <20220727071042.8796-1-feng.tang@intel.com>

kmalloc's API family is critical for mm, and one of its characteristics is
that it rounds up the requested size to a fixed size class (mostly a power
of 2). When a user requests '2^n + 1' bytes, 2^(n+1) bytes may actually be
allocated, so in the worst case around 50% of the memory is wasted.

The wastage is not a big issue for requests that get allocated/freed
quickly, but it may cause problems for objects with a longer lifetime.

We hit a kernel boot OOM panic (v5.10), and the dumped slab info showed:

[   26.062145] kmalloc-2k            814056KB     814056KB

Debugging showed a huge number of 'struct iova_magazine' allocations, whose
size is 1032 bytes (1024 + 8), so each allocation wastes 1016 bytes. Though
the issue was solved by giving the machine the right (bigger) amount of RAM,
it would still be nice to optimize the size (either use a kmalloc-friendly
size or create a dedicated slab for it).

The lkml archive also has a crash-kernel OOM case [1] from 2019 that seems
related to a similar slab-waste situation, as the log looks alike:

[    4.332648] iommu: Adding device 0000:20:02.0 to group 16
[    4.338946] swapper/0 invoked oom-killer: gfp_mask=0x6040c0(GFP_KERNEL|__GFP_COMP), nodemask=(null), order=0, oom_score_adj=0
...
[    4.857565] kmalloc-2048          59164KB      59164KB

The crash kernel only has 256MB of memory, and 59MB is pretty big there.
(Note: the related code has been changed and optimised in recent kernels
[2]; these logs are quoted only to demonstrate the problem.)

So add a way to track each kmalloc's memory-waste info and leverage the
existing SLUB debug framework to show the call stack of the original
allocation, so that users can evaluate the waste situation, identify hot
spots and optimize accordingly, for better memory utilization.
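
As a back-of-the-envelope illustration of the rounding described above,
here is a standalone userspace sketch (not part of the patch; it assumes
plain power-of-two kmalloc size classes and ignores the 96/192-byte caches
and minimum alignment):

#include <stdio.h>

/*
 * Simplified model of kmalloc size-class rounding: round the request up
 * to the next power of two.
 */
static unsigned long kmalloc_bucket(unsigned long size)
{
	unsigned long bucket = 8;

	while (bucket < size)
		bucket <<= 1;
	return bucket;
}

int main(void)
{
	unsigned long req = 1032;	/* e.g. sizeof(struct iova_magazine) */
	unsigned long got = kmalloc_bucket(req);

	/* prints: request 1032 -> size class 2048, waste 1016 bytes */
	printf("request %lu -> size class %lu, waste %lu bytes\n",
	       req, got, got - req);
	return 0;
}
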
The waste info is integrated into the existing interface
/sys/kernel/debug/slab/kmalloc-xx/alloc_traces. One example of
'kmalloc-4k' after boot is:

   126 ixgbe_alloc_q_vector+0xa5/0x4a0 [ixgbe] waste=233856/1856 age=1493302/1493830/1494358 pid=1284 cpus=32 nodes=1
	__slab_alloc.isra.86+0x52/0x80
	__kmalloc_node+0x143/0x350
	ixgbe_alloc_q_vector+0xa5/0x4a0 [ixgbe]
	ixgbe_init_interrupt_scheme+0x1a6/0x730 [ixgbe]
	ixgbe_probe+0xc8e/0x10d0 [ixgbe]
	local_pci_probe+0x42/0x80
	work_for_cpu_fn+0x13/0x20
	process_one_work+0x1c5/0x390

which means that in the 'kmalloc-4k' slab there are 126 requests of 2240
bytes which each got a 4KB slot (wasting 1856 bytes per object and 233856
bytes in total). And when the system starts some real workload, like
multiple docker instances, the waste is even more severe.

[1]. https://lkml.org/lkml/2019/8/12/266
[2]. https://lore.kernel.org/lkml/2920df89-9975-5785-f79b-257d3052dfaf@huawei.com/

[Thanks Hyeonggon for pointing out several bugs in the sorting/format]
[Thanks Vlastimil for suggesting a way to reduce the memory usage of
 orig_size and keep it only for kmalloc objects]

Signed-off-by: Feng Tang
---
 include/linux/slab.h |  2 +
 mm/slub.c            | 99 ++++++++++++++++++++++++++++++++++++--------
 2 files changed, 83 insertions(+), 18 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 0fefdf528e0d..a713b0e5bbcd 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -29,6 +29,8 @@
 #define SLAB_RED_ZONE		((slab_flags_t __force)0x00000400U)
 /* DEBUG: Poison objects */
 #define SLAB_POISON		((slab_flags_t __force)0x00000800U)
+/* Indicate a kmalloc slab */
+#define SLAB_KMALLOC		((slab_flags_t __force)0x00001000U)
 /* Align objs on cache lines */
 #define SLAB_HWCACHE_ALIGN	((slab_flags_t __force)0x00002000U)
 /* Use GFP_DMA memory */
diff --git a/mm/slub.c b/mm/slub.c
index 862dbd9af4f5..2e046cc10b84 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -191,6 +191,12 @@ static inline bool kmem_cache_debug(struct kmem_cache *s)
 	return kmem_cache_debug_flags(s, SLAB_DEBUG_FLAGS);
 }
 
+static inline bool slub_debug_orig_size(struct kmem_cache *s)
+{
+	return (kmem_cache_debug_flags(s, SLAB_STORE_USER) &&
+			(s->flags & SLAB_KMALLOC));
+}
+
 void *fixup_red_left(struct kmem_cache *s, void *p)
 {
 	if (kmem_cache_debug_flags(s, SLAB_RED_ZONE))
@@ -816,6 +822,33 @@ static void print_slab_info(const struct slab *slab)
 	       folio_flags(folio, 0));
 }
 
+static inline void set_orig_size(struct kmem_cache *s,
+				void *object, unsigned int orig_size)
+{
+	void *p = kasan_reset_tag(object);
+
+	if (!slub_debug_orig_size(s))
+		return;
+
+	p += get_info_end(s);
+	p += sizeof(struct track) * 2;
+
+	*(unsigned int *)p = orig_size;
+}
+
+static unsigned int get_orig_size(struct kmem_cache *s, void *object)
+{
+	void *p = kasan_reset_tag(object);
+
+	if (!slub_debug_orig_size(s))
+		return s->object_size;
+
+	p += get_info_end(s);
+	p += sizeof(struct track) * 2;
+
+	return *(unsigned int *)p;
+}
+
 static void slab_bug(struct kmem_cache *s, char *fmt, ...)
 {
 	struct va_format vaf;
@@ -875,6 +908,9 @@ static void print_trailer(struct kmem_cache *s, struct slab *slab, u8 *p)
 	if (s->flags & SLAB_STORE_USER)
 		off += 2 * sizeof(struct track);
 
+	if (slub_debug_orig_size(s))
+		off += sizeof(unsigned int);
+
 	off += kasan_metadata_size(s);
 
 	if (off != size_from_object(s))
@@ -1026,10 +1062,14 @@ static int check_pad_bytes(struct kmem_cache *s, struct slab *slab, u8 *p)
 {
 	unsigned long off = get_info_end(s);	/* The end of info */
 
-	if (s->flags & SLAB_STORE_USER)
+	if (s->flags & SLAB_STORE_USER) {
 		/* We also have user information there */
 		off += 2 * sizeof(struct track);
 
+		if (s->flags & SLAB_KMALLOC)
+			off += sizeof(unsigned int);
+	}
+
 	off += kasan_metadata_size(s);
 
 	if (size_from_object(s) == off)
@@ -1325,7 +1365,8 @@ static inline int alloc_consistency_checks(struct kmem_cache *s,
 
 static noinline int alloc_debug_processing(struct kmem_cache *s,
 					struct slab *slab,
-					void *object, unsigned long addr)
+					void *object, unsigned long addr,
+					unsigned int orig_size)
 {
 	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
 		if (!alloc_consistency_checks(s, slab, object))
@@ -1335,6 +1376,9 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
 	/* Success perform special debug activities for allocs */
 	if (s->flags & SLAB_STORE_USER)
 		set_track(s, object, TRACK_ALLOC, addr);
+
+	set_orig_size(s, object, orig_size);
+
 	trace(s, slab, object, 1);
 	init_object(s, object, SLUB_RED_ACTIVE);
 	return 1;
@@ -1661,7 +1705,8 @@ static inline void setup_slab_debug(struct kmem_cache *s,
 			struct slab *slab, void *addr) {}
 
 static inline int alloc_debug_processing(struct kmem_cache *s,
-	struct slab *slab, void *object, unsigned long addr) { return 0; }
+	struct slab *slab, void *object, unsigned long addr,
+	unsigned int orig_size) { return 0; }
 
 static inline int free_debug_processing(
 	struct kmem_cache *s, struct slab *slab,
@@ -2905,7 +2950,7 @@ static inline void *get_freelist(struct kmem_cache *s, struct slab *slab)
  * already disabled (which is the case for bulk allocation).
  */
 static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
-			  unsigned long addr, struct kmem_cache_cpu *c)
+			  unsigned long addr, struct kmem_cache_cpu *c, unsigned int orig_size)
 {
 	void *freelist;
 	struct slab *slab;
@@ -3048,7 +3093,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 check_new_slab:
 
 	if (kmem_cache_debug(s)) {
-		if (!alloc_debug_processing(s, slab, freelist, addr)) {
+		if (!alloc_debug_processing(s, slab, freelist, addr, orig_size)) {
 			/* Slab failed checks. Next slab needed */
 			goto new_slab;
 		} else {
@@ -3102,7 +3147,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
  * pointer.
  */
 static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
-			  unsigned long addr, struct kmem_cache_cpu *c)
+			  unsigned long addr, struct kmem_cache_cpu *c, unsigned int orig_size)
 {
 	void *p;
 
@@ -3115,7 +3160,7 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	c = slub_get_cpu_ptr(s->cpu_slab);
 #endif
 
-	p = ___slab_alloc(s, gfpflags, node, addr, c);
+	p = ___slab_alloc(s, gfpflags, node, addr, c, orig_size);
 #ifdef CONFIG_PREEMPT_COUNT
 	slub_put_cpu_ptr(s->cpu_slab);
 #endif
@@ -3206,7 +3251,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
 	 */
 	if (IS_ENABLED(CONFIG_PREEMPT_RT) ||
 	    unlikely(!object || !slab || !node_match(slab, node))) {
-		object = __slab_alloc(s, gfpflags, node, addr, c);
+		object = __slab_alloc(s, gfpflags, node, addr, c, orig_size);
 	} else {
 		void *next_object = get_freepointer_safe(s, object);
 
@@ -3709,7 +3754,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 			 * of re-populating per CPU c->freelist
 			 */
 			p[i] = ___slab_alloc(s, flags, NUMA_NO_NODE,
-					    _RET_IP_, c);
+					    _RET_IP_, c, s->object_size);
 			if (unlikely(!p[i]))
 				goto error;
 
@@ -4112,12 +4157,17 @@ static int calculate_sizes(struct kmem_cache *s)
 	}
 
 #ifdef CONFIG_SLUB_DEBUG
-	if (flags & SLAB_STORE_USER)
+	if (flags & SLAB_STORE_USER) {
 		/*
 		 * Need to store information about allocs and frees after
 		 * the object.
 		 */
 		size += 2 * sizeof(struct track);
+
+		/* Save the original kmalloc request size */
+		if (flags & SLAB_KMALLOC)
+			size += sizeof(unsigned int);
+	}
 #endif
 
 	kasan_cache_create(s, &size, &s->flags);
@@ -4842,7 +4892,7 @@ void __init kmem_cache_init(void)
 
 	/* Now we can use the kmem_cache to allocate kmalloc slabs */
 	setup_kmalloc_cache_index_table();
-	create_kmalloc_caches(0);
+	create_kmalloc_caches(SLAB_KMALLOC);
 
 	/* Setup random freelists for each cache */
 	init_freelist_randomization();
@@ -5068,6 +5118,7 @@ struct location {
 	depot_stack_handle_t handle;
 	unsigned long count;
 	unsigned long addr;
+	unsigned long waste;
 	long long sum_time;
 	long min_time;
 	long max_time;
@@ -5114,13 +5165,15 @@ static int alloc_loc_track(struct loc_track *t, unsigned long max, gfp_t flags)
 }
 
 static int add_location(struct loc_track *t, struct kmem_cache *s,
-				const struct track *track)
+				const struct track *track,
+				unsigned int orig_size)
 {
 	long start, end, pos;
 	struct location *l;
-	unsigned long caddr, chandle;
+	unsigned long caddr, chandle, cwaste;
 	unsigned long age = jiffies - track->when;
 	depot_stack_handle_t handle = 0;
+	unsigned int waste = s->object_size - orig_size;
 
 #ifdef CONFIG_STACKDEPOT
 	handle = READ_ONCE(track->handle);
@@ -5138,11 +5191,13 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
 		if (pos == end)
 			break;
 
-		caddr = t->loc[pos].addr;
-		chandle = t->loc[pos].handle;
-		if ((track->addr == caddr) && (handle == chandle)) {
+		l = &t->loc[pos];
+		caddr = l->addr;
+		chandle = l->handle;
+		cwaste = l->waste;
+		if ((track->addr == caddr) && (handle == chandle) &&
+			(waste == cwaste)) {
 
-			l = &t->loc[pos];
 			l->count++;
 			if (track->when) {
 				l->sum_time += age;
@@ -5167,6 +5222,9 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
 			end = pos;
 		else if (track->addr == caddr && handle < chandle)
 			end = pos;
+		else if (track->addr == caddr && handle == chandle &&
+				waste < cwaste)
+			end = pos;
 		else
 			start = pos;
 	}
@@ -5190,6 +5248,7 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
 	l->min_pid = track->pid;
 	l->max_pid = track->pid;
 	l->handle = handle;
+	l->waste = waste;
 	cpumask_clear(to_cpumask(l->cpus));
 	cpumask_set_cpu(track->cpu, to_cpumask(l->cpus));
 	nodes_clear(l->nodes);
@@ -5208,7 +5267,7 @@ static void process_slab(struct loc_track *t, struct kmem_cache *s,
 
 	for_each_object(p, s, addr, slab->objects)
 		if (!test_bit(__obj_to_index(s, addr, p), obj_map))
-			add_location(t, s, get_track(s, p, alloc));
+			add_location(t, s, get_track(s, p, alloc), get_orig_size(s, p));
 }
 #endif  /* CONFIG_DEBUG_FS   */
 #endif	/* CONFIG_SLUB_DEBUG */
@@ -6078,6 +6137,10 @@ static int slab_debugfs_show(struct seq_file *seq, void *v)
 		else
 			seq_puts(seq, "<not-available>");
 
+		if (l->waste)
+			seq_printf(seq, " waste=%lu/%lu",
+				l->count * l->waste, l->waste);
+
 		if (l->sum_time != l->min_time) {
 			seq_printf(seq, " age=%ld/%llu/%ld",
 				l->min_time, div_u64(l->sum_time, l->count),
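
A quick way to consume the new waste= field is to aggregate it over a
cache's alloc_traces file. The helper below is only a sketch: it assumes
the 'waste=<total>/<per-object>' format shown in the commit message, that
CONFIG_SLUB_DEBUG with user tracking (e.g. slub_debug=U) is enabled for
the cache, and that debugfs is mounted at /sys/kernel/debug.

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] :
		"/sys/kernel/debug/slab/kmalloc-4k/alloc_traces";
	unsigned long long total = 0, waste, per_obj;
	char line[4096];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return 1;
	}

	/* Sum the per-call-site totals from lines like "... waste=233856/1856 ..." */
	while (fgets(line, sizeof(line), f)) {
		char *w = strstr(line, "waste=");

		if (w && sscanf(w, "waste=%llu/%llu", &waste, &per_obj) == 2)
			total += waste;
	}
	fclose(f);

	printf("%s: %llu bytes reported as wasted\n", path, total);
	return 0;
}
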
From patchwork Wed Jul 27 07:10:41 2022
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 12930194
From: Feng Tang
To: Andrew Morton, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Cc: Dave Hansen, Robin Murphy, John Garry, Kefeng Wang, Feng Tang
Subject: [PATCH v3 2/3] mm/slub: only zero the requested size of buffer for kzalloc
Date: Wed, 27 Jul 2022 15:10:41 +0800
Message-Id: <20220727071042.8796-3-feng.tang@intel.com>
In-Reply-To: <20220727071042.8796-1-feng.tang@intel.com>
References: <20220727071042.8796-1-feng.tang@intel.com>

kzalloc/kmalloc will round up the request size to a fixed size (mostly a
power of 2), so the allocated memory can be larger than what was requested.
Currently the kzalloc family of APIs zeroes all of the allocated memory.

To detect out-of-bounds usage of the extra allocated memory, zero only the
requested part, so that a sanity check can be added for the extra space
later.

kzalloc users who call ksize() later and utilize this extra space should be
aware that the space is not zeroed any more.
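
For context, a hypothetical caller pattern of the kind the last paragraph
warns about might look like the sketch below (illustration only, not code
from the kernel tree; struct foo and foo_create() are made up):

#include <linux/slab.h>
#include <linux/overflow.h>

struct foo {
	size_t cap;
	char buf[];		/* may grow into the rounded-up allocation */
};

static struct foo *foo_create(size_t n, gfp_t gfp)
{
	struct foo *f = kzalloc(struct_size(f, buf, n), gfp);

	if (!f)
		return NULL;

	/*
	 * Before this patch, the whole rounded-up kmalloc object was
	 * zeroed, so growing into ksize(f) - sizeof(*f) bytes silently
	 * found zero-filled memory.  After it, only the requested
	 * struct_size() bytes are zeroed; any use of the extra space
	 * reported by ksize() must initialize it explicitly.
	 */
	f->cap = ksize(f) - sizeof(*f);
	memset(f->buf + n, 0, f->cap - n);	/* now required */
	return f;
}
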
Signed-off-by: Feng Tang
---
 mm/slab.c | 8 ++++----
 mm/slab.h | 9 +++++++--
 mm/slub.c | 6 +++---
 3 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 5e73e2d80222..771f7c57d3ef 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3236,7 +3236,7 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
 	init = slab_want_init_on_alloc(flags, cachep);
 
 out_hooks:
-	slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr, init);
+	slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr, init, 0);
 	return ptr;
 }
 
@@ -3299,7 +3299,7 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
 	init = slab_want_init_on_alloc(flags, cachep);
 
 out:
-	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init);
+	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init, 0);
 	return objp;
 }
 
@@ -3546,13 +3546,13 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	 * Done outside of the IRQ disabled section.
 	 */
 	slab_post_alloc_hook(s, objcg, flags, size, p,
-				slab_want_init_on_alloc(flags, s));
+				slab_want_init_on_alloc(flags, s), 0);
 	/* FIXME: Trace call missing. Christoph would like a bulk variant */
 	return size;
 error:
 	local_irq_enable();
 	cache_alloc_debugcheck_after_bulk(s, flags, i, p, _RET_IP_);
-	slab_post_alloc_hook(s, objcg, flags, i, p, false);
+	slab_post_alloc_hook(s, objcg, flags, i, p, false, 0);
 	kmem_cache_free_bulk(s, i, p);
 	return 0;
 }
diff --git a/mm/slab.h b/mm/slab.h
index 4ec82bec15ec..7b53868efe6d 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -710,12 +710,17 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
 
 static inline void slab_post_alloc_hook(struct kmem_cache *s,
 					struct obj_cgroup *objcg, gfp_t flags,
-					size_t size, void **p, bool init)
+					size_t size, void **p, bool init,
+					unsigned int orig_size)
 {
 	size_t i;
 
 	flags &= gfp_allowed_mask;
 
+	/* If original request size(kmalloc) is not set, use object_size */
+	if (!orig_size)
+		orig_size = s->object_size;
+
 	/*
 	 * As memory initialization might be integrated into KASAN,
 	 * kasan_slab_alloc and initialization memset must be
@@ -726,7 +731,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	for (i = 0; i < size; i++) {
 		p[i] = kasan_slab_alloc(s, p[i], flags, init);
 		if (p[i] && init && !kasan_has_integrated_init())
-			memset(p[i], 0, s->object_size);
+			memset(p[i], 0, orig_size);
 		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);
 	}
diff --git a/mm/slub.c b/mm/slub.c
index 2e046cc10b84..946919066a4b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3285,7 +3285,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
 	init = slab_want_init_on_alloc(gfpflags, s);
 
 out:
-	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init);
+	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init, orig_size);
 
 	return object;
 }
@@ -3778,11 +3778,11 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	 * Done outside of the IRQ disabled fastpath loop.
 	 */
 	slab_post_alloc_hook(s, objcg, flags, size, p,
-				slab_want_init_on_alloc(flags, s));
+				slab_want_init_on_alloc(flags, s), 0);
 	return i;
 error:
 	slub_put_cpu_ptr(s->cpu_slab);
-	slab_post_alloc_hook(s, objcg, flags, i, p, false);
+	slab_post_alloc_hook(s, objcg, flags, i, p, false, 0);
 	kmem_cache_free_bulk(s, i, p);
 	return 0;
 }

From patchwork Wed Jul 27 07:10:42 2022
X-Patchwork-Submitter: Feng Tang
X-Patchwork-Id: 12930195
From: Feng Tang
To: Andrew Morton, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Cc: Dave Hansen, Robin Murphy, John Garry, Kefeng Wang, Feng Tang
Subject: [PATCH v3 3/3] mm/slub: extend redzone check to cover extra allocated kmalloc space than requested
Date: Wed, 27 Jul 2022 15:10:42 +0800
Message-Id: <20220727071042.8796-4-feng.tang@intel.com>
In-Reply-To: <20220727071042.8796-1-feng.tang@intel.com>
References: <20220727071042.8796-1-feng.tang@intel.com>

kmalloc will round up the request size to a fixed size (mostly a power of
2), so there can be extra space beyond what is requested, whose size is the
actual buffer size minus the original request size. To better detect
out-of-bounds access or abuse of this space, add a redzone sanity check for
it.

In the current kernel, some kmalloc users already know about the existence
of this extra space and utilize it after calling 'ksize()' to learn the
real size of the allocated buffer. So skip the sanity check for objects on
which ksize() has been called, treating them as legitimate users.
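
To make the intent concrete, here is a hypothetical out-of-bounds pattern
this extended redzone check is meant to flag (illustration only; demo_oob()
is made up, and the exact point at which the check fires depends on the
enabled slub_debug options):

#include <linux/slab.h>

static void demo_oob(void)
{
	/* Requests 100 bytes; the object comes from the kmalloc-128 cache. */
	char *p = kmalloc(100, GFP_KERNEL);

	if (!p)
		return;

	/*
	 * Byte 100 is past the requested size but still inside the
	 * rounded-up 128-byte object.  Previously this went unnoticed;
	 * with this patch the extra space is redzoned, so the write
	 * should be reported by the slub_debug checks (e.g. when p is
	 * freed), assuming red zoning and user tracking are enabled
	 * (e.g. slub_debug=ZU) and ksize(p) was not called first.
	 */
	p[100] = 0xaa;

	kfree(p);
}
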
Suggested-by: Vlastimil Babka
Signed-off-by: Feng Tang
Reported-by: kernel test robot
Reported-by: kernel test robot
Signed-off-by: Feng Tang
Acked-by: Dmitry Vyukov
---
 mm/slub.c | 52 +++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 49 insertions(+), 3 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 946919066a4b..added2653bb0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -836,6 +836,11 @@ static inline void set_orig_size(struct kmem_cache *s,
 	*(unsigned int *)p = orig_size;
 }
 
+static inline void skip_orig_size_check(struct kmem_cache *s, const void *object)
+{
+	set_orig_size(s, (void *)object, s->object_size);
+}
+
 static unsigned int get_orig_size(struct kmem_cache *s, void *object)
 {
 	void *p = kasan_reset_tag(object);
@@ -967,13 +972,35 @@ static __printf(3, 4) void slab_err(struct kmem_cache *s, struct slab *slab,
 static void init_object(struct kmem_cache *s, void *object, u8 val)
 {
 	u8 *p = kasan_reset_tag(object);
+	unsigned int orig_size = s->object_size;
 
-	if (s->flags & SLAB_RED_ZONE)
+	if (s->flags & SLAB_RED_ZONE) {
 		memset(p - s->red_left_pad, val, s->red_left_pad);
 
+		if (slub_debug_orig_size(s) && val == SLUB_RED_ACTIVE) {
+			unsigned int zone_start;
+
+			orig_size = get_orig_size(s, object);
+			zone_start = orig_size;
+
+			if (!freeptr_outside_object(s))
+				zone_start = max_t(unsigned int, orig_size,
+						s->offset + sizeof(void *));
+
+			/*
+			 * Redzone the extra allocated space by kmalloc
+			 * than requested.
+			 */
+			if (zone_start < s->object_size)
+				memset(p + zone_start, val,
+					s->object_size - zone_start);
+		}
+	}
+
 	if (s->flags & __OBJECT_POISON) {
-		memset(p, POISON_FREE, s->object_size - 1);
-		p[s->object_size - 1] = POISON_END;
+		memset(p, POISON_FREE, orig_size - 1);
+		p[orig_size - 1] = POISON_END;
 	}
 
 	if (s->flags & SLAB_RED_ZONE)
@@ -1120,6 +1147,7 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
 {
 	u8 *p = object;
 	u8 *endobject = object + s->object_size;
+	unsigned int orig_size;
 
 	if (s->flags & SLAB_RED_ZONE) {
 		if (!check_bytes_and_report(s, slab, object, "Left Redzone",
@@ -1129,6 +1157,20 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
 		if (!check_bytes_and_report(s, slab, object, "Right Redzone",
 			endobject, val, s->inuse - s->object_size))
 			return 0;
+
+		if (slub_debug_orig_size(s) && val == SLUB_RED_ACTIVE) {
+			orig_size = get_orig_size(s, object);
+
+			if (!freeptr_outside_object(s))
+				orig_size = max_t(unsigned int, orig_size,
+						s->offset + sizeof(void *));
+			if (s->object_size > orig_size &&
+				!check_bytes_and_report(s, slab, object,
+					"kmalloc Redzone", p + orig_size,
+					val, s->object_size - orig_size)) {
+				return 0;
+			}
+		}
 	} else {
 		if ((s->flags & SLAB_POISON) && s->object_size < s->inuse) {
 			check_bytes_and_report(s, slab, p, "Alignment padding",
@@ -4588,6 +4630,10 @@ size_t __ksize(const void *object)
 	if (unlikely(!folio_test_slab(folio)))
 		return folio_size(folio);
 
+#ifdef CONFIG_SLUB_DEBUG
+	skip_orig_size_check(folio_slab(folio)->slab_cache, object);
+#endif
+
 	return slab_ksize(folio_slab(folio)->slab_cache);
 }
 EXPORT_SYMBOL(__ksize);