From patchwork Wed Jun 17 19:53:48 2020
X-Patchwork-Submitter: Kees Cook <keescook@chromium.org>
X-Patchwork-Id: 11610503
From: Kees Cook <keescook@chromium.org>
To: Andrew Morton
Cc: Kees Cook, Vlastimil Babka, Roman Gushchin, Christoph Lameter,
	Alexander Popov, Pekka Enberg, David Rientjes, Joonsoo Kim,
	vinmenon@codeaurora.org, Matthew Garrett, Jann Horn,
	Vijayanand Jitta, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] mm: Expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB and SLOB
Date: Wed, 17 Jun 2020 12:53:48 -0700
Message-Id: <20200617195349.3471794-2-keescook@chromium.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200617195349.3471794-1-keescook@chromium.org>
References: <20200617195349.3471794-1-keescook@chromium.org>

Include SLAB and SLOB caches when performing kmem_cache pointer
verification. A defense against such corruption[1] should be applied
to all the allocators. With this added, the "SLAB_FREE_CROSS" and
"SLAB_FREE_PAGE" LKDTM tests now pass on SLAB:

	lkdtm: Performing direct entry SLAB_FREE_CROSS
	lkdtm: Attempting cross-cache slab free ...
	------------[ cut here ]------------
	cache_from_obj: Wrong slab cache. lkdtm-heap-b but object is from lkdtm-heap-a
	WARNING: CPU: 2 PID: 2195 at mm/slab.h:530 kmem_cache_free+0x8d/0x1d0
	...
	lkdtm: Performing direct entry SLAB_FREE_PAGE
	lkdtm: Attempting non-Slab slab free ...
	------------[ cut here ]------------
	virt_to_cache: Object is not a Slab page!
	WARNING: CPU: 1 PID: 2202 at mm/slab.h:489 kmem_cache_free+0x196/0x1d0

Additionally clean up neighboring Kconfig entries for clarity,
readability, and redundant option removal.
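For context, the bug class above can be reproduced with a minimal,
hypothetical test module in the spirit of LKDTM's SLAB_FREE_CROSS (all
identifiers below are invented and this is not the in-tree LKDTM code;
only try it in a disposable VM, since without the hardening a
cross-cache free is real heap corruption):

/*
 * Hypothetical demonstration module (not the in-tree LKDTM code).
 * The empty constructor defeats slab merging, which would otherwise
 * alias two same-sized caches and hide the cross-cache free.
 */
#include <linux/module.h>
#include <linux/slab.h>

static struct kmem_cache *demo_a, *demo_b;

static void demo_ctor(void *obj)
{
}

static int __init cross_free_demo_init(void)
{
	void *obj;

	demo_a = kmem_cache_create("demo-heap-a", 64, 0, 0, demo_ctor);
	demo_b = kmem_cache_create("demo-heap-b", 64, 0, 0, demo_ctor);
	if (!demo_a || !demo_b) {
		kmem_cache_destroy(demo_a);	/* NULL-safe */
		kmem_cache_destroy(demo_b);
		return -ENOMEM;
	}

	obj = kmem_cache_alloc(demo_a, GFP_KERNEL);
	if (obj)
		/*
		 * Deliberately the wrong cache: with the hardening enabled,
		 * cache_from_obj() WARNs and redirects the free to the real
		 * cache; without it, this silently corrupts the heap.
		 */
		kmem_cache_free(demo_b, obj);
	return 0;
}

static void __exit cross_free_demo_exit(void)
{
	kmem_cache_destroy(demo_a);
	kmem_cache_destroy(demo_b);
}

module_init(cross_free_demo_init);
module_exit(cross_free_demo_exit);
MODULE_LICENSE("GPL");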
[1] https://github.com/ThomasKing2014/slides/raw/master/Building%20universal%20Android%20rooting%20with%20a%20type%20confusion%20vulnerability.pdf

Fixes: 598a0717a816 ("mm/slab: validate cache membership under freelist hardening")
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 init/Kconfig |  8 ++++----
 mm/slab.c    |  8 --------
 mm/slab.h    | 31 +++++++++++++++++++++++++++++++
 mm/slub.c    | 25 +------------------------
 4 files changed, 36 insertions(+), 36 deletions(-)

diff --git a/init/Kconfig b/init/Kconfig
index a46aa8f3174d..b5e616e5fd2f 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1885,9 +1885,8 @@ config SLAB_MERGE_DEFAULT
 	  command line.
 
 config SLAB_FREELIST_RANDOM
-	default n
+	bool "Randomize slab freelist"
 	depends on SLAB || SLUB
-	bool "SLAB freelist randomization"
 	help
 	  Randomizes the freelist order used on creating new pages. This
 	  security feature reduces the predictability of the kernel slab
@@ -1895,12 +1894,13 @@ config SLAB_FREELIST_RANDOM
 
 config SLAB_FREELIST_HARDENED
 	bool "Harden slab freelist metadata"
-	depends on SLUB
 	help
 	  Many kernel heap attacks try to target slab cache metadata and
 	  other infrastructure. This options makes minor performance
 	  sacrifices to harden the kernel slab allocator against common
-	  freelist exploit methods.
+	  freelist exploit methods. Some slab implementations have more
+	  sanity-checking than others. This option is most effective with
+	  CONFIG_SLUB.
 
 config SHUFFLE_PAGE_ALLOCATOR
 	bool "Page allocator randomization"
diff --git a/mm/slab.c b/mm/slab.c
index 6134c4c36d4c..9350062ffc1a 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3672,14 +3672,6 @@ void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller)
 }
 EXPORT_SYMBOL(__kmalloc_track_caller);
 
-static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
-{
-	if (memcg_kmem_enabled())
-		return virt_to_cache(x);
-	else
-		return s;
-}
-
 /**
  * kmem_cache_free - Deallocate an object
  * @cachep: The cache the allocation was from.
diff --git a/mm/slab.h b/mm/slab.h
index a2696d306b62..090d8b8e7bf8 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -467,6 +467,20 @@ static inline void memcg_link_cache(struct kmem_cache *s,
 
 #endif /* CONFIG_MEMCG_KMEM */
 
+#ifdef CONFIG_SLUB_DEBUG
+extern inline int kmem_cache_debug_flags(struct kmem_cache *s,
+					 slab_flags_t flags);
+extern inline void print_tracking(struct kmem_cache *s, void *object);
+#else
+static inline int kmem_cache_debug_flags(struct kmem_cache *s,
+					 slab_flags_t flags)
+{
+	return 0;
+}
+static inline void print_tracking(struct kmem_cache *s, void *object)
+{ }
+#endif
+
 static inline struct kmem_cache *virt_to_cache(const void *obj)
 {
 	struct page *page;
@@ -503,6 +517,23 @@ static __always_inline void uncharge_slab_page(struct page *page, int order,
 	memcg_uncharge_slab(page, order, s);
 }
 
+static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
+{
+	struct kmem_cache *cachep;
+
+	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
+	    !memcg_kmem_enabled() &&
+	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
+		return s;
+
+	cachep = virt_to_cache(x);
+	if (WARN(cachep && !slab_equal_or_root(cachep, s),
+		 "%s: Wrong slab cache. %s but object is from %s\n",
+		 __func__, s->name, cachep->name))
+		print_tracking(cachep, x);
+	return cachep;
+}
+
 static inline size_t slab_ksize(const struct kmem_cache *s)
 {
 #ifndef CONFIG_SLUB
diff --git a/mm/slub.c b/mm/slub.c
index f7a1d8537674..cd4891448db4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -120,7 +120,6 @@ DEFINE_STATIC_KEY_TRUE(slub_debug_enabled);
 #else
 DEFINE_STATIC_KEY_FALSE(slub_debug_enabled);
 #endif
-#endif
 
 /*
  * Returns true if any of the specified slub_debug flags is enabled for the
@@ -129,12 +128,11 @@ DEFINE_STATIC_KEY_FALSE(slub_debug_enabled);
  */
 static inline int kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t flags)
 {
-#ifdef CONFIG_SLUB_DEBUG
 	if (static_branch_unlikely(&slub_debug_enabled))
 		return s->flags & flags;
-#endif
 	return 0;
 }
+#endif
 
 static inline int kmem_cache_debug(struct kmem_cache *s)
 {
@@ -1524,10 +1522,6 @@ static bool freelist_corrupted(struct kmem_cache *s, struct page *page,
 {
 	return false;
 }
-
-static void print_tracking(struct kmem_cache *s, void *object)
-{
-}
 #endif /* CONFIG_SLUB_DEBUG */
 
 /*
@@ -3179,23 +3173,6 @@ void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
 }
 #endif
 
-static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
-{
-	struct kmem_cache *cachep;
-
-	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
-	    !memcg_kmem_enabled() &&
-	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
-		return s;
-
-	cachep = virt_to_cache(x);
-	if (WARN(cachep && !slab_equal_or_root(cachep, s),
-		 "%s: Wrong slab cache. %s but object is from %s\n",
-		 __func__, s->name, cachep->name))
-		print_tracking(cachep, x);
-	return cachep;
-}
-
 void kmem_cache_free(struct kmem_cache *s, void *x)
 {
 	s = cache_from_obj(s, x);
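
As a footnote on the consolidated helper: the kernel derives an
object's owning cache from its struct page (virt_to_cache()), but the
shape of the check is easy to see in a small, self-contained userspace
analogy (all names below are invented; the object header stands in for
page->slab_cache):

#include <stdio.h>
#include <stdlib.h>

struct cache { const char *name; };

/* Each allocation carries a header recording its owning cache. */
struct header { struct cache *owner; };

static void *cache_alloc(struct cache *c, size_t size)
{
	struct header *h = malloc(sizeof(*h) + size);

	if (!h)
		return NULL;
	h->owner = c;
	return h + 1;			/* payload follows the header */
}

static void cache_free(struct cache *claimed, void *p)
{
	struct header *h = (struct header *)p - 1;

	/*
	 * The cache_from_obj()-style membership check: trust the object,
	 * not the caller, and warn when the two disagree.
	 */
	if (h->owner != claimed)
		fprintf(stderr, "wrong cache: %s but object is from %s\n",
			claimed->name, h->owner->name);
	free(h);
}

int main(void)
{
	struct cache a = { "heap-a" }, b = { "heap-b" };
	void *p = cache_alloc(&a, 64);

	if (p)
		cache_free(&b, p);	/* cross-cache free: prints a warning */
	return 0;
}

When CONFIG_SLAB_FREELIST_HARDENED is disabled, the early return in the
kernel version above reduces the fast path to a pair of static-branch
tests (memcg and SLAB_CONSISTENCY_CHECKS debugging), so unhardened
builds keep a cheap kmem_cache_free().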