From patchwork Wed Sep 7 07:10:23 2022
X-Patchwork-Submitter: Feng Tang <feng.tang@intel.com>
X-Patchwork-Id: 12968512
From: Feng Tang <feng.tang@intel.com>
To: Andrew Morton, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Roman Gushchin,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, Dmitry Vyukov, Jonathan Corbet
Cc: Dave Hansen, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kasan-dev@googlegroups.com, Feng Tang <feng.tang@intel.com>
Subject: [PATCH v5 4/4] mm/slub: extend redzone check to extra allocated
 kmalloc space than requested
Date: Wed, 7 Sep 2022 15:10:23 +0800
Message-Id: <20220907071023.3838692-5-feng.tang@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220907071023.3838692-1-feng.tang@intel.com>
References: <20220907071023.3838692-1-feng.tang@intel.com>
reason="No valid SPF" header.from=intel.com (policy=none) ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662534665; a=rsa-sha256; cv=none; b=KOL1FuK3L3UuQEtMwfaQZbsrtYjCpKAjl8PomTqTeaCmraOE/2XyoP1SNCqdkbUfU8a+tR 1Y/7ClGdaKf588mi8DRtZgtFn8rhmOnwVrHGFS18b9IGQN4Ge4xUkMQeakKu3M68z8Ce1u 1nTXzLotLAW7E0QSFoxIgxylYB+ivpc= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662534665; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=A7VbBWQ1FaID1ZOWc911qvZ0dgLNnXtOgQ9HgLyFnzo=; b=AqFApfQ8DvV2kx56j7xBfAEWDPXlmLreB+2OMIatsUPJZRJ2EiyYRIvCW9s+DqJnE2SChP XCxMMtdUR+gKpLHQMhA2x0lOhMs53fdNJXlv9OzdZnmC/KMP5GnsN6Q1HUkoQ/WsdmfvHk nb280Er+1qX3LhCJ6Cr5z0GpWpmjVvw= Authentication-Results: imf19.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=ABfYnOSu; spf=softfail (imf19.hostedemail.com: 134.134.136.20 is neither permitted nor denied by domain of feng.tang@intel.com) smtp.mailfrom=feng.tang@intel.com; dmarc=fail reason="No valid SPF" header.from=intel.com (policy=none) X-Stat-Signature: mqy6eucb9jq8mzgmjtau1zzsqynu59w4 X-Rspamd-Queue-Id: 8A7DE1A005A X-Rspamd-Server: rspam09 X-Rspam-User: X-HE-Tag: 1662534664-719382 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: kmalloc will round up the request size to a fixed size (mostly power of 2), so there could be a extra space than what is requested, whose size is the actual buffer size minus original request size. To better detect out of bound access or abuse of this space, add redzone sanity check for it. And in current kernel, some kmalloc user already knows the existence of the space and utilizes it after calling 'ksize()' to know the real size of the allocated buffer. So we skip the sanity check for objects which have been called with ksize(), as treating them as legitimate users. 
Suggested-by: Vlastimil Babka
Signed-off-by: Feng Tang
---
 mm/slab.h        |  4 ++++
 mm/slab_common.c |  4 ++++
 mm/slub.c        | 57 +++++++++++++++++++++++++++++++++++++++++++++---
 3 files changed, 62 insertions(+), 3 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 20f9e2a9814f..0bc91b30b031 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -885,4 +885,8 @@ void __check_heap_object(const void *ptr, unsigned long n,
 }
 #endif
 
+#ifdef CONFIG_SLUB_DEBUG
+void skip_orig_size_check(struct kmem_cache *s, const void *object);
+#endif
+
 #endif /* MM_SLAB_H */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 8e13e3aac53f..5106667d6adb 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1001,6 +1001,10 @@ size_t __ksize(const void *object)
 		return folio_size(folio);
 	}
 
+#ifdef CONFIG_SLUB_DEBUG
+	skip_orig_size_check(folio_slab(folio)->slab_cache, object);
+#endif
+
 	return slab_ksize(folio_slab(folio)->slab_cache);
 }
 
diff --git a/mm/slub.c b/mm/slub.c
index f523601d3fcf..2f0302136604 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -812,12 +812,27 @@ static inline void set_orig_size(struct kmem_cache *s,
 	if (!slub_debug_orig_size(s))
 		return;
 
+#ifdef CONFIG_KASAN_GENERIC
+	/*
+	 * KASAN can save its free meta data at the start of the object
+	 * area, so skip the redzone check if the meta data size is big
+	 * enough to possibly overlap with the kmalloc redzone.
+	 */
+	if (s->kasan_info.free_meta_size_in_object * 2 >= s->object_size)
+		orig_size = s->object_size;
+#endif
+
 	p += get_info_end(s);
 	p += sizeof(struct track) * 2;
 
 	*(unsigned int *)p = orig_size;
 }
 
+void skip_orig_size_check(struct kmem_cache *s, const void *object)
+{
+	set_orig_size(s, (void *)object, s->object_size);
+}
+
 static unsigned int get_orig_size(struct kmem_cache *s, void *object)
 {
 	void *p = kasan_reset_tag(object);
@@ -949,13 +964,34 @@ static __printf(3, 4) void slab_err(struct kmem_cache *s, struct slab *slab,
 static void init_object(struct kmem_cache *s, void *object, u8 val)
 {
 	u8 *p = kasan_reset_tag(object);
+	unsigned int orig_size = s->object_size;
 
-	if (s->flags & SLAB_RED_ZONE)
+	if (s->flags & SLAB_RED_ZONE) {
 		memset(p - s->red_left_pad, val, s->red_left_pad);
 
+		if (slub_debug_orig_size(s) && val == SLUB_RED_ACTIVE) {
+			unsigned int zone_start;
+
+			orig_size = get_orig_size(s, object);
+			zone_start = orig_size;
+
+			if (!freeptr_outside_object(s))
+				zone_start = max_t(unsigned int, orig_size,
+						   s->offset + sizeof(void *));
+
+			/*
+			 * Redzone the extra space kmalloc allocated beyond
+			 * the requested size.
+			 */
+			if (zone_start < s->object_size)
+				memset(p + zone_start, val,
+				       s->object_size - zone_start);
+		}
+	}
+
 	if (s->flags & __OBJECT_POISON) {
-		memset(p, POISON_FREE, s->object_size - 1);
-		p[s->object_size - 1] = POISON_END;
+		memset(p, POISON_FREE, orig_size - 1);
+		p[orig_size - 1] = POISON_END;
 	}
 
 	if (s->flags & SLAB_RED_ZONE)
@@ -1103,6 +1139,7 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
 {
 	u8 *p = object;
 	u8 *endobject = object + s->object_size;
+	unsigned int orig_size;
 
 	if (s->flags & SLAB_RED_ZONE) {
 		if (!check_bytes_and_report(s, slab, object, "Left Redzone",
@@ -1112,6 +1149,20 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
 		if (!check_bytes_and_report(s, slab, object, "Right Redzone",
 			endobject, val, s->inuse - s->object_size))
 			return 0;
+
+		if (slub_debug_orig_size(s) && val == SLUB_RED_ACTIVE) {
+			orig_size = get_orig_size(s, object);
+
+			if (!freeptr_outside_object(s))
+				orig_size = max_t(unsigned int, orig_size,
+						  s->offset + sizeof(void *));
+			if (s->object_size > orig_size &&
+			    !check_bytes_and_report(s, slab, object,
+				"kmalloc Redzone", p + orig_size,
+				val, s->object_size - orig_size)) {
+				return 0;
+			}
+		}
 	} else {
 		if ((s->flags & SLAB_POISON) && s->object_size < s->inuse) {
 			check_bytes_and_report(s, slab, p, "Alignment padding",