From patchwork Fri Oct 4 13:25:54 2019
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 11174489
Date: Fri, 4 Oct 2019 15:25:54 +0200
Message-Id: <20191004132555.202973-1-glider@google.com>
Subject: [PATCH v1 1/2] mm: slub: init_on_free=1 should wipe freelist ptr for bulk allocations
From: Alexander Potapenko
To: Andrew Morton, Christoph Lameter
Cc: Alexander Potapenko, Thibaut Sautereau, Kees Cook, Laura Abbott,
    linux-mm@kvack.org, kernel-hardening@lists.openwall.com

slab_alloc_node() already zeroes out the freelist pointer when init_on_free
is on. Thibaut Sautereau noticed that the same needs to be done for
kmem_cache_alloc_bulk(), which performs the allocations separately.

kmem_cache_alloc_bulk() is currently used in two places in the kernel, so
this change is unlikely to have a major performance impact.

SLAB doesn't require a similar change, as auto-initialization makes the
allocator store the freelist pointers off-slab.

Reported-by: Thibaut Sautereau
Reported-by: Kees Cook
Signed-off-by: Alexander Potapenko
Fixes: 6471384af2a6 ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options")
To: Andrew Morton
To: Christoph Lameter
Cc: Laura Abbott
Cc: linux-mm@kvack.org
Cc: kernel-hardening@lists.openwall.com
---
 mm/slub.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 8834563cdb4b..fe90bed40eb3 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2669,6 +2669,16 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
         return p;
 }
 
+/*
+ * If the object has been wiped upon free, make sure it's fully initialized by
+ * zeroing out freelist pointer.
+ */
+static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s, void *obj)
+{
+        if (unlikely(slab_want_init_on_free(s)) && obj)
+                memset((void *)((char *)obj + s->offset), 0, sizeof(void *));
+}
+
 /*
  * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc)
  * have the fastpath folded into their functions. So no function call
@@ -2757,12 +2767,8 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
                 prefetch_freepointer(s, next_object);
                 stat(s, ALLOC_FASTPATH);
         }
-        /*
-         * If the object has been wiped upon free, make sure it's fully
-         * initialized by zeroing out freelist pointer.
-         */
-        if (unlikely(slab_want_init_on_free(s)) && object)
-                memset(object + s->offset, 0, sizeof(void *));
+
+        maybe_wipe_obj_freeptr(s, object);
 
         if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object)
                 memset(object, 0, s->object_size);
@@ -3176,10 +3182,13 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
                                 goto error;
 
                         c = this_cpu_ptr(s->cpu_slab);
+                        maybe_wipe_obj_freeptr(s, p[i]);
+
                         continue; /* goto for-loop */
                 }
                 c->freelist = get_freepointer(s, object);
                 p[i] = object;
+                maybe_wipe_obj_freeptr(s, p[i]);
         }
         c->tid = next_tid(c->tid);
         local_irq_enable();
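
To see why the wipe in maybe_wipe_obj_freeptr() has to happen on the
allocation path, consider a stripped-down user-space model of a freelist-based
cache. The names below (fake_cache, fake_free, fake_alloc) are hypothetical
and the layout is deliberately simplified; this is a sketch of the idea, not
SLUB's real implementation:

    /*
     * User-space sketch, not kernel code. A free object stores the pointer to
     * the next free object at cache->offset, so the init_on_free memset() done
     * at free time is partially undone the moment the object is linked into
     * the freelist. Re-zeroing sizeof(void *) bytes at that offset on
     * allocation restores the "object is fully zeroed" guarantee.
     */
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct fake_cache {
            size_t size;    /* object size */
            size_t offset;  /* where the freelist pointer lives in a free object */
            void *freelist; /* head of the freelist */
    };

    static void fake_free(struct fake_cache *c, void *obj)
    {
            memset(obj, 0, c->size);                            /* init_on_free wipe */
            *(void **)((char *)obj + c->offset) = c->freelist;  /* link into freelist */
            c->freelist = obj;
    }

    static void *fake_alloc(struct fake_cache *c)
    {
            void *obj = c->freelist;

            if (!obj)
                    return NULL;
            c->freelist = *(void **)((char *)obj + c->offset);
            /* the fix: zero the word that held the freelist pointer */
            memset((char *)obj + c->offset, 0, sizeof(void *));
            return obj;
    }

    int main(void)
    {
            struct fake_cache c = { .size = 64, .offset = 0, .freelist = NULL };
            void *a = malloc(c.size), *b = malloc(c.size);

            fake_free(&c, a);
            fake_free(&c, b);               /* b now stores a pointer to a */

            void *obj = fake_alloc(&c);     /* hands back b */
            for (size_t i = 0; i < c.size; i++)
                    assert(((unsigned char *)obj)[i] == 0); /* no leaked pointer bytes */
            printf("object is fully zeroed after reallocation\n");
            free(a);
            free(b);
            return 0;
    }

Freeing writes the next-free pointer back into the just-wiped object, so
without the extra memset() on the allocation path the caller would receive an
object whose word at s->offset still holds an allocator-internal pointer,
breaking the init_on_free guarantee and leaking a heap address.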
From patchwork Fri Oct 4 13:25:55 2019
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 11174491
Date: Fri, 4 Oct 2019 15:25:55 +0200
In-Reply-To: <20191004132555.202973-1-glider@google.com>
Message-Id: <20191004132555.202973-2-glider@google.com>
References: <20191004132555.202973-1-glider@google.com>
Subject: [PATCH v1 2/2] lib/test_meminit: add a kmem_cache_alloc_bulk() test
From: Alexander Potapenko
To: Andrew Morton, Christoph Lameter
Cc: Alexander Potapenko, Kees Cook, linux-mm@kvack.org,
    kernel-hardening@lists.openwall.com

Make sure objects obtained from kmem_cache_alloc_bulk() and released with
kmem_cache_free_bulk() are properly initialized.
Signed-off-by: Alexander Potapenko
Cc: Kees Cook
To: Andrew Morton
To: Christoph Lameter
Cc: linux-mm@kvack.org
Cc: kernel-hardening@lists.openwall.com
---
 lib/test_meminit.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/lib/test_meminit.c b/lib/test_meminit.c
index 9729f271d150..9742e5cb853a 100644
--- a/lib/test_meminit.c
+++ b/lib/test_meminit.c
@@ -297,6 +297,32 @@ static int __init do_kmem_cache_rcu_persistent(int size, int *total_failures)
         return 1;
 }
 
+static int __init do_kmem_cache_size_bulk(int size, int *total_failures)
+{
+        struct kmem_cache *c;
+        int i, iter, maxiter = 1024;
+        int num, bytes;
+        bool fail = false;
+        void *objects[10];
+
+        c = kmem_cache_create("test_cache", size, size, 0, NULL);
+        for (iter = 0; (iter < maxiter) && !fail; iter++) {
+                num = kmem_cache_alloc_bulk(c, GFP_KERNEL, ARRAY_SIZE(objects),
+                                            objects);
+                for (i = 0; i < num; i++) {
+                        bytes = count_nonzero_bytes(objects[i], size);
+                        if (bytes)
+                                fail = true;
+                        fill_with_garbage(objects[i], size);
+                }
+
+                if (num)
+                        kmem_cache_free_bulk(c, num, objects);
+        }
+        *total_failures += fail;
+        return 1;
+}
+
 /*
  * Test kmem_cache allocation by creating caches of different sizes, with and
  * without constructors, with and without SLAB_TYPESAFE_BY_RCU.
@@ -318,6 +344,7 @@ static int __init test_kmemcache(int *total_failures)
                         num_tests += do_kmem_cache_size(size, ctor, rcu, zero,
                                                         &failures);
                 }
+                num_tests += do_kmem_cache_size_bulk(size, &failures);
         }
         REPORT_FAILURES_IN_FN();
         *total_failures += failures;
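
The new helper follows the same allocate / check / poison / free pattern as
the existing do_kmem_cache_size() tests. Below is a rough user-space sketch of
that pattern; the helper names mirror those the test calls, but the bodies are
simplified stand-ins written for this illustration (a static pool plays the
role of the kmem_cache, and init_on_free is modelled by a flag), not the
implementations from lib/test_meminit.c:

    /* User-space sketch of the bulk-allocation init test pattern. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define OBJ_SIZE 64
    #define BATCH    10

    static unsigned char pool[BATCH][OBJ_SIZE]; /* stands in for a kmem_cache */
    static bool init_on_free = true;            /* the behaviour under test */

    static int count_nonzero_bytes(const void *p, int size)
    {
            const unsigned char *b = p;
            int i, n = 0;

            for (i = 0; i < size; i++)
                    n += (b[i] != 0);
            return n;
    }

    static void fill_with_garbage(void *p, int size)
    {
            memset(p, 0xA5, size); /* poison so stale data is visible next time */
    }

    static int alloc_bulk(void *objects[], int nr)
    {
            int i;

            for (i = 0; i < nr; i++)
                    objects[i] = pool[i];
            return nr;
    }

    static void free_bulk(void *objects[], int nr)
    {
            int i;

            for (i = 0; i < nr; i++)
                    if (init_on_free)
                            memset(objects[i], 0, OBJ_SIZE);
    }

    int main(void)
    {
            void *objects[BATCH];
            bool fail = false;
            int iter, i;

            for (iter = 0; iter < 1024 && !fail; iter++) {
                    int num = alloc_bulk(objects, BATCH);

                    for (i = 0; i < num; i++) {
                            if (count_nonzero_bytes(objects[i], OBJ_SIZE))
                                    fail = true;        /* stale data leaked */
                            fill_with_garbage(objects[i], OBJ_SIZE);
                    }
                    free_bulk(objects, num);
            }
            printf("bulk init test: %s\n", fail ? "FAIL" : "PASS");
            return fail;
    }

Poisoning every object before it goes back to the allocator is what makes the
zero-check meaningful: if the free path (or the bulk allocation path, as fixed
in patch 1) fails to reinitialize memory, the 0xA5 pattern or a stale freelist
pointer shows up as nonzero bytes on the next iteration.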