From patchwork Mon Oct 7 09:16:04 2019
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 11177025
Date: Mon, 7 Oct 2019 11:16:04 +0200
Message-Id: <20191007091605.30530-1-glider@google.com>
X-Mailer: git-send-email 2.23.0.581.g78d2f28ef7-goog
Subject: [PATCH 1/2] mm: slub: init_on_free=1 should wipe freelist ptr for
 bulk allocations
From: glider@google.com
To: Andrew Morton <akpm@linux-foundation.org>,
 Christoph Lameter <cl@linux.com>
Cc: Alexander Potapenko <glider@google.com>,
 Thibaut Sautereau <thibaut@sautereau.fr>,
 Kees Cook <keescook@chromium.org>,
 Laura Abbott <labbott@redhat.com>,
 linux-mm@kvack.org, kernel-hardening@lists.openwall.com

slab_alloc_node() already zeroed out the freelist pointer if
init_on_free was on. Thibaut Sautereau noticed that the same needs to
be done for kmem_cache_alloc_bulk(), which performs the allocations
separately.

kmem_cache_alloc_bulk() is currently used in two places in the kernel,
so this change is unlikely to have a major performance impact.

SLAB doesn't require a similar change, as auto-initialization makes the
allocator store the freelist pointers off-slab.

Reported-by: Thibaut Sautereau <thibaut@sautereau.fr>
Reported-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Alexander Potapenko <glider@google.com>
Fixes: 6471384af2a6 ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options")
To: Andrew Morton <akpm@linux-foundation.org>
To: Christoph Lameter <cl@linux.com>
Cc: Laura Abbott <labbott@redhat.com>
Cc: linux-mm@kvack.org
Cc: kernel-hardening@lists.openwall.com
---
v2:
 - added a missing return type to maybe_wipe_obj_freeptr()
   (spotted by kbuild test robot)
---
 mm/slub.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 8834563cdb4b..89a69aaf58c4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2669,6 +2669,17 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	return p;
 }
 
+/*
+ * If the object has been wiped upon free, make sure it's fully initialized by
+ * zeroing out freelist pointer.
+ */
+static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
+						   void *obj)
+{
+	if (unlikely(slab_want_init_on_free(s)) && obj)
+		memset((void *)((char *)obj + s->offset), 0, sizeof(void *));
+}
+
 /*
  * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc)
  * have the fastpath folded into their functions. So no function call
@@ -2757,12 +2768,8 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 		prefetch_freepointer(s, next_object);
 		stat(s, ALLOC_FASTPATH);
 	}
-	/*
-	 * If the object has been wiped upon free, make sure it's fully
-	 * initialized by zeroing out freelist pointer.
-	 */
-	if (unlikely(slab_want_init_on_free(s)) && object)
-		memset(object + s->offset, 0, sizeof(void *));
+
+	maybe_wipe_obj_freeptr(s, object);
 
 	if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object)
 		memset(object, 0, s->object_size);
@@ -3176,10 +3183,13 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 				goto error;
 
 			c = this_cpu_ptr(s->cpu_slab);
+			maybe_wipe_obj_freeptr(s, p[i]);
+
 			continue; /* goto for-loop */
 		}
 		c->freelist = get_freepointer(s, object);
 		p[i] = object;
+		maybe_wipe_obj_freeptr(s, p[i]);
 	}
 	c->tid = next_tid(c->tid);
 	local_irq_enable();
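
A note for reviewers, not part of the patch: the underlying problem is one
of ordering. With init_on_free=1, SLUB wipes an object when it is freed, but
the freelist pointer is then written *inside* the object (at s->offset), so
an object handed out later still carries a heap address in that word unless
the allocation path clears it. slab_alloc_node() already did that;
kmem_cache_alloc_bulk() did not. The user-space sketch below models only
that sequencing; toy_cache, toy_free(), toy_alloc() and FREEPTR_OFFSET are
invented stand-ins for the real SLUB internals, and the wipe_freeptr flag
plays the role of maybe_wipe_obj_freeptr():

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define OBJ_SIZE	64
#define FREEPTR_OFFSET	0	/* toy stand-in for s->offset */

struct toy_cache {
	void *freelist;		/* head of the inline free list */
};

/* Free path with init_on_free: wipe first, then link the object into the
 * freelist, which plants a pointer inside the freshly "zeroed" object. */
static void toy_free(struct toy_cache *c, void *obj)
{
	memset(obj, 0, OBJ_SIZE);
	*(void **)((char *)obj + FREEPTR_OFFSET) = c->freelist;
	c->freelist = obj;
}

/* Allocation path: pop the head object; only wipe_freeptr (the
 * maybe_wipe_obj_freeptr() analogue) removes the stale pointer. */
static void *toy_alloc(struct toy_cache *c, int wipe_freeptr)
{
	void *obj = c->freelist;

	if (!obj)
		return calloc(1, OBJ_SIZE);
	c->freelist = *(void **)((char *)obj + FREEPTR_OFFSET);
	if (wipe_freeptr)
		memset((char *)obj + FREEPTR_OFFSET, 0, sizeof(void *));
	return obj;
}

int main(void)
{
	struct toy_cache c = { .freelist = NULL };

	toy_free(&c, calloc(1, OBJ_SIZE));
	toy_free(&c, calloc(1, OBJ_SIZE));

	void *o = toy_alloc(&c, 0);	/* pre-fix bulk path */
	printf("without wipe: first word = %p (leaked heap address)\n",
	       *(void **)o);

	toy_free(&c, o);
	o = toy_alloc(&c, 1);		/* post-fix behavior */
	printf("with wipe:    first word = %p\n", *(void **)o);
	return 0;
}

Running it prints a live heap address for the first allocation and nil for
the second, which is the difference this patch makes for objects returned
by kmem_cache_alloc_bulk().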
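
Also for context: none of this triggers unless init-on-free is enabled.
Assuming a tree that contains commit 6471384af2a6 from the Fixes: tag, that
is either a boot-time switch or a build-time default:

	init_on_free=1				# on the kernel command line
	CONFIG_INIT_ON_FREE_DEFAULT_ON=y	# or in the kernel config

With neither set (and no cache flag otherwise forcing initialization),
slab_want_init_on_free() stays false and the new maybe_wipe_obj_freeptr()
does nothing on the allocation fast path.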