From patchwork Fri Jun 1 11:53:29 2018
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 10443171
From: Michal Hocko
To: Andrew Morton
Cc: Linus Torvalds, Tom Herbert, linux-mm@kvack.org, LKML, Michal Hocko
Subject: [PATCH] mm: kvmalloc does not fallback to vmalloc for incompatible gfp flags
Date: Fri, 1 Jun 2018 13:53:29 +0200
Message-Id: <20180601115329.27807-1-mhocko@kernel.org>

From: Michal Hocko

kvmalloc warned about an incompatible gfp_mask to catch abusers (mostly
GFP_NOFS), with the intention that the warning would motivate the authors
of that code to fix it. Linus argues that this just motivates people to do
even more hacks like

	if (gfp == GFP_KERNEL)
		kvmalloc
	else
		kmalloc

I haven't seen this happening much (Linus pointed to bucket_lock, which
special-cases an atomic allocation, but my git foo hasn't found much more),
but it is true that we could grow more of those in the future. Therefore
Linus suggested simply not falling back to vmalloc for incompatible gfp
flags and sticking with the kmalloc path instead.

Requested-by: Linus Torvalds
Signed-off-by: Michal Hocko
---
Hi Andrew,
for more context:
Linus has pointed out [1] that our (well, my) insisting on GFP_KERNEL-compatible
gfp flags for kvmalloc* can actually lead to worse code, because people will
work around the restriction. So this patch allows kvmalloc to be more
permissive and silently skip the vmalloc path for incompatible gfp flags.
This will not help my original plan of forcing people to think about their
GFP_NOFS usage more deeply, but I can live with that, obviously...

alloc_bucket_spinlocks is the only place I could find which special-cases
kvmalloc based on the gfp mask.

[1] http://lkml.kernel.org/r/CA+55aFxvNCEBQsxfr=yL3jgxiC8M8wY2MHwVBH+T8qSWyP-WPg@mail.gmail.com

 lib/bucket_locks.c | 5 +----
 mm/util.c          | 6 ++++--
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/lib/bucket_locks.c b/lib/bucket_locks.c
index 266a97c5708b..ade3ce6c4af6 100644
--- a/lib/bucket_locks.c
+++ b/lib/bucket_locks.c
@@ -30,10 +30,7 @@ int alloc_bucket_spinlocks(spinlock_t **locks, unsigned int *locks_mask,
 	}
 
 	if (sizeof(spinlock_t) != 0) {
-		if (gfpflags_allow_blocking(gfp))
-			tlocks = kvmalloc(size * sizeof(spinlock_t), gfp);
-		else
-			tlocks = kmalloc_array(size, sizeof(spinlock_t), gfp);
+		tlocks = kvmalloc_array(size, sizeof(spinlock_t), gfp);
 		if (!tlocks)
 			return -ENOMEM;
 		for (i = 0; i < size; i++)
diff --git a/mm/util.c b/mm/util.c
index 45fc3169e7b0..c6586c146995 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -391,7 +391,8 @@ EXPORT_SYMBOL(vm_mmap);
  * __GFP_RETRY_MAYFAIL is supported, and it should be used only if kmalloc is
  * preferable to the vmalloc fallback, due to visible performance drawbacks.
  *
- * Any use of gfp flags outside of GFP_KERNEL should be consulted with mm people.
+ * Please note that any use of gfp flags outside of GFP_KERNEL is careful to not
+ * fall back to vmalloc.
  */
 void *kvmalloc_node(size_t size, gfp_t flags, int node)
 {
@@ -402,7 +403,8 @@ void *kvmalloc_node(size_t size, gfp_t flags, int node)
 	 * vmalloc uses GFP_KERNEL for some internal allocations (e.g page tables)
 	 * so the given set of flags has to be compatible.
 	 */
-	WARN_ON_ONCE((flags & GFP_KERNEL) != GFP_KERNEL);
+	if ((flags & GFP_KERNEL) != GFP_KERNEL)
+		return kmalloc_node(size, flags, node);
 
 	/*
 	 * We want to attempt a large physically contiguous block first because
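
As a minimal, hypothetical caller sketch (not part of this patch; struct foo
and alloc_foo_table() are made-up names for illustration), the caller-visible
effect of the change looks roughly like this: a GFP_KERNEL-compatible mask may
still fall back to vmalloc for large allocations, while a mask such as GFP_NOFS
or GFP_ATOMIC now silently stays on the kmalloc path instead of tripping the
WARN_ON_ONCE:

	#include <linux/mm.h>	/* kvmalloc_array(), kvfree() */
	#include <linux/slab.h>

	struct foo {
		unsigned long key;
	};

	/* Hypothetical caller, for illustration only (not part of this patch). */
	static struct foo *alloc_foo_table(unsigned int nr, gfp_t gfp)
	{
		/*
		 * GFP_KERNEL (or a compatible mask): kvmalloc_array() may
		 * fall back to vmalloc for a large nr, as before.
		 * GFP_NOFS/GFP_ATOMIC: with this patch the allocation simply
		 * stays on the kmalloc path (no vmalloc fallback, no warning),
		 * so an explicit gfpflags_allow_blocking() workaround in the
		 * caller is no longer needed.
		 */
		return kvmalloc_array(nr, sizeof(struct foo), gfp);
	}

	/* Callers free with kvfree() regardless of which path was taken. */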